Serve as the lead modeler for machine learning models and analytic products, from design phase to development, deployment, and integration into larger business processes. Proven experience turning ambiguous business problems and raw data into rigorous analytic solutions by applying critical thinking and advanced technical & statistical programming techniques. A deep understanding of the theory and application of a variety of statistical and machine learning methods and algorithms, including optimization under uncertainty, forecasting, time series analysis, and Bayesian methods. It is the policy of Citizens Bank to provide equal employment and advancement opportunities to all colleagues and applicants for employment without regard to race, color, ethnicity, religion, gender, pregnancy/childbirth, age, national origin, sexual orientation, gender identity or expression, disability or perceived disability, genetic information, citizenship, veteran or military status, marital or domestic partner status, or any other category protected by federal, state and/or local laws. Results of the background check are individually reviewed based upon legal requirements imposed by our regulators and with consideration of the nature and gravity of the background history and the job offered.
We are seeking a highly skilled and motivated AI/ML Engineer / Python Developer with proven experience in Large Language Models (LLMs), GPU-based computing, and cloud-native architecture on GCP. Architect and maintain cloud-native applications on Google Cloud Platform (GCP), including use of TPUs and GPU instances. Build and scale data pipelines with Apache Kafka for real-time data streaming and use Apache Spark (PySpark) for distributed data processing. Deep expertise in GPU-accelerated training, with proficiency in TensorFlow Distributed, PyTorch Distributed, and Horovod. Proficiency in Apache Kafka, Apache Spark (PySpark), and Kubernetes. Demonstrated experience with GCP services, particularly with TPUs and GPU-enabled compute instances. Experience in building and deploying scalable cloud-native architectures and microservices. Contributions to open-source LLM projects or experience training LLMs from scratch.
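To illustrate the distributed-training stack this posting names, below is a minimal sketch of multi-GPU training with PyTorch DistributedDataParallel, one of the listed frameworks. The model, dataset, and hyperparameters are placeholder assumptions, not details from the role.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    dist.init_process_group("nccl")               # torchrun supplies rank/world size via env vars
    local_rank = int(os.environ["LOCAL_RANK"])    # set by torchrun, one process per GPU
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 2).cuda()        # placeholder model
    model = DDP(model, device_ids=[local_rank])   # gradients all-reduce across ranks on backward()

    dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 2, (1024,)))
    sampler = DistributedSampler(dataset)         # shards data so each rank sees a distinct slice
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x.cuda()), y.cuda())
        loss.backward()                           # synchronizes gradients across GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch: torchrun --nproc_per_node=<num_gpus> train.py
```

Horovod and TensorFlow's distribution strategies follow the same basic pattern: shard the data, replicate the model per device, and all-reduce gradients each step.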
Assess clients' data strategy, systems, and datasets by exploring cloud and on-premises infrastructure and conducting meetings. Bachelor's degree in a Data & Analytics-related field (or equivalent experience), such as Data Science or Data Engineering. Proficiency in data visualization tools such as PowerBI or Tableau. Background in solution architecture, DevOps, or data engineering with platforms like Azure, AWS, or GCP. Experience with SQL databases and Spark-based platforms (Databricks, Fabric, Synapse).
Design & evolve our cloud lakehouse - Own end-to-end architecture (AWS, Databricks) for both batch and streaming pipelines, balancing cost, performance, and governance. Engineer reliable data pipelines - Implement Python/Spark jobs, Kafka/Spark Streaming flows, dbt transformations, and Airflow orchestration with CI/CD (GitHub Actions, Terraform). Data modeling & warehousing - Dimensional, data vault, and real-time schemas; SQL performance tuning on data warehouses, including Snowflake and Databricks SQL. Orchestration & DevOps - Airflow, dbt, GitHub Actions, Terraform; strong CI/CD experience. Modern stack - Databricks, Airflow, Kafka, dbt, Monte Carlo, Terraform - no legacy drag
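As a hedged sketch of the Kafka-to-Spark Streaming flow described above (the topic name, schema, broker, and paths are hypothetical), a PySpark Structured Streaming job landing Kafka events in a Delta lakehouse table might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Assumed event schema for the placeholder "orders" topic.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")                  # assumes a Delta-enabled runtime (e.g., Databricks)
    .option("checkpointLocation", "s3://example-bucket/chk/orders")
    .outputMode("append")
    .start("s3://example-bucket/bronze/orders")
)
query.awaitTermination()
```

In practice a job like this would be orchestrated by Airflow and deployed through GitHub Actions and Terraform, per the posting's CI/CD stack.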
Technologies: Spark and Hadoop (Big Data), Scala (functional programming). In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. Candidate must be located within commuting distance of Raleigh, NC or Charlotte, NC or Richardson, TX, or be willing to relocate to the areas. At least 3 years of experience working with Apache Spark, Scala, Spark SQL and Starburst. Knowledge of data serialization formats such as Parquet, Avro, or ORC.
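Since the role calls out Parquet, Avro, and ORC, here is a brief PySpark sketch (the posting itself emphasizes Scala; paths and data are placeholders) showing how Spark serializes the same DataFrame to each format:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("formats-demo").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

df.write.mode("overwrite").parquet("/tmp/demo.parquet")  # columnar, splittable, the common lake default
df.write.mode("overwrite").orc("/tmp/demo.orc")          # columnar, Hive-friendly
df.write.mode("overwrite").format("avro").save("/tmp/demo.avro")  # row-oriented; requires the spark-avro package

spark.read.parquet("/tmp/demo.parquet").show()
```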
In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. Candidate must be located within commuting distance of Charlotte, NC or be willing to relocate to the area. Strong knowledge of RESTful APIs, data structures, algorithms, collections, multi-threading, memory management, and concurrency. Experience in the big data ecosystem using Hadoop, Spark, and Scala, along with Python packages and libraries for large-scale data. We enable clients in more than 50 countries to navigate their digital transformation.
Oversee multiple teams of analysts and senior analysts in the delivery of complex and comprehensive risk reporting, data, business intelligence, and related services. Identify and resolve technical, operational, risk management, business, and organizational challenges. Previous experience in banking, with specific emphasis on reporting, business intelligence, systems, technology, data, risk, compliance or related areas. Advanced skills in data wrangling, data engineering, data science, or related areas. Experience with languages and tools such as Python, SQL, SAS, Qlik, Tableau, etc.
We at Synergisticit understand the problem of the mismatch between employers' requirements and employees' skills, which is why since 2010 we have helped thousands of candidates get jobs at technology clients such as Apple, Google, PayPal, Western Union, Client, Visa, and Walmart Labs, to name a few. We want Data Science/Machine Learning/Data Analyst and Java Full Stack candidates. Knowledge of statistics, Gen AI, LLMs, Python, computer vision, and data visualization tools. Preferred skills: NLP, text mining, Tableau, PowerBI, Databricks, TensorFlow. Spring Boot, Microservices, Docker, Jenkins, GitHub, Kubernetes, and REST API experience
Design and implement scalable data pipelines and architectures on Azure Databricks. Leverage Apache Spark, Delta Lake, and Azure-native services to build high-performance data solutions. Lead the migration of Azure SQL to Azure Databricks, ensuring a seamless transition of data workloads. Design and implement scalable data pipelines to extract, transform, and load (ETL/ELT) data from Azure SQL into Databricks Delta Lake. Optimize Azure SQL queries and indexing strategies before migration to enhance performance in Databricks.
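As a rough sketch of the Azure SQL-to-Delta Lake load described above (connection details, table names, bounds, and the target schema are placeholder assumptions), a Databricks job might read over JDBC in parallel and land a bronze Delta table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()    # provided by the Databricks runtime

src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-server.database.windows.net;database=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")               # in practice, pull credentials from a secret scope
    .option("password", "***")
    .option("partitionColumn", "order_id")    # numeric column used to parallelize the extract
    .option("lowerBound", "1")
    .option("upperBound", "1000000")
    .option("numPartitions", "8")
    .load()
)

# Land as a Delta table (assumes a "bronze" schema exists);
# downstream jobs can MERGE incremental changes into it.
src.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```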
The ideal candidate will have a strong background in data engineering, cloud-based solutions, and a proven track record of building and managing scalable data pipelines on AWS. The Data Engineer will work closely with cross-functional teams to develop, maintain, and optimize data solutions that support critical business insights. 3-5 years of experience in data engineering with a focus on AWS cloud solutions. Hands-on experience with AWS services like Glue, Lambda, S3, Redshift, and Athena. Experience with big data tools such as Apache Spark, Hadoop, or EMR. AWS certifications (e.g., AWS Certified Data Analytics, AWS Certified Solutions Architect).
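For illustration, a minimal AWS Glue PySpark job matching this stack might read from the Glue Data Catalog and write curated Parquet to S3; the database, table, and bucket names are hypothetical:

```python
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales", table_name="raw_orders"
)

# Write curated Parquet back to S3, queryable via Athena or Redshift Spectrum.
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```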
For one of our ongoing projects, we are looking for multi-skilled Java developers with Python, Spark, and Hortonworks/Hadoop. Standing up cutting-edge analytical capabilities, leveraging automation, cognitive and science-based techniques to manage data and models, and driving operational efficiency by offering continuous insights and improvements. Help with system integration, performance evaluation, application scalability, and resource refactoring, based on a thorough understanding of applicable technology, tools such as Python, Spark, Hadoop, AtScale, Dremio, and existing designs. 3+ years of Hadoop experience (Hortonworks preferred). Most importantly, we need resources who are computationally/quantitatively experienced.
Experience building enterprise systems, especially using Databricks, Snowflake, and platforms like Azure, AWS, and GCP; experience with any of these (e.g., Azure, Databricks, Snowflake) would be a plus. Experience working with Snowflake and/or Microsoft Fabric. Extensive experience working with Databricks and Azure Data Factory for data lake and data warehouse solutions. Hands-on experience with big data technologies (such as Hadoop, Spark, Kafka)
Lighthouse Technology Services is partnering with our client to fill their Senior MDM Python (AWS) Developer position! We're currently a Python and Angular/TypeScript tech stack team and use a range of AWS services like S3, PostgreSQL, DynamoDB, Athena, Snowflake, Lambda, and Glue. AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect) are highly desirable. Experience with modern data stack technologies (e.g., dbt, Snowflake, Databricks). Background in DevOps practices and Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation.
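As one hedged example of this Python/AWS stack in action (the bucket, table, and event shape are assumptions, not details from the role), a Lambda handler might archive a record to S3 and index it in DynamoDB:

```python
import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("mdm-records")   # placeholder table name

def handler(event, context):
    record = event["record"]                              # assumed event shape
    key = f"records/{record['id']}.json"
    # Archive the full record to S3, then index its location in DynamoDB.
    s3.put_object(Bucket="mdm-archive", Key=key, Body=json.dumps(record))
    table.put_item(Item={"id": record["id"], "s3_key": key})
    return {"statusCode": 200, "body": key}
```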
Minimum 3 years' experience with Azure (Azure Data Factory, Azure Synapse, Azure SQL Services). - Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL Services. - Designing and building data pipelines using API, ingestion, and streaming methods. ManpowerGroup® (NYSE: MAN), the leading global workforce solutions company, helps organizations transform in a fast-changing world of work by sourcing, assessing, developing, and managing the talent that enables them to win. We are recognized consistently for our diversity - as a best place to work for Women, Inclusion, Equality and Disability - and in 2022 ManpowerGroup was named one of the World's Most Ethical Companies for the 13th year, all confirming our position as the brand of choice for in-demand talent.
At least 4 years of pre-sales experience, including responding to RFPs/RFIs, proactive client engagement, delivering client demos, and preparing presentations. Proficiency in Python, R, or Scala and ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn, Hugging Face, LangChain). Experience with big data technologies (e.g., Hadoop, Spark, Snowflake) and databases (SQL, NoSQL). Deep knowledge of financial services domains, including risk management, fraud detection, customer analytics, and regulatory compliance. Certifications in cloud platforms (e.g., AWS Certified Machine Learning, Azure AI Engineer) or data science (e.g., TensorFlow Developer Certificate).
Managing development teams in building AI and GenAI solutions, including but not limited to analytical modeling, prompt engineering, general all-purpose programming (e.g., Python), testing, communication of results, front-end and back-end integration, and iterative development with clients. Experience with common LLM development frameworks (e.g., LangChain, Semantic Kernel), relational storage (SQL), and non-relational storage (NoSQL). Experience in analytical techniques such as Machine Learning, Deep Learning, and Optimization. Understanding of or hands-on experience with Azure, AWS, and/or Google Cloud platforms. For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, the San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for employment in accordance with these laws.
Fully Remote (onsite in Vinings, GA from Mon-Thurs if converted to FTE). We are seeking a Senior BI Analyst with expertise in Python, SQL, Tableau, and predictive analytics. You will work with large datasets across SAP and Snowflake environments to deliver strategic insights, with a strong focus on forecasting container arrivals, optimizing inventory levels, and identifying cost-saving opportunities, all in support of a major ERP modernization and the nationwide consolidation of distribution centers. System Migration: Lead data analysis and reporting for a major migration from a legacy ERP to SAP Warehouse Management, with reporting transitioning to Snowflake. Python Predictive Analytics: Perform predictive modeling in Python to forecast container arrivals, inventory levels, and other supply chain KPIs.
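A minimal sketch of the kind of Python predictive modeling described here, forecasting weekly container arrivals from lagged history; the synthetic data stands in for a real extract, which would come from Snowflake:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for the real arrivals history.
rng = np.random.default_rng(0)
weeks = pd.date_range("2023-01-01", periods=120, freq="W")
arrivals = 500 + 50 * np.sin(np.arange(120) / 6) + rng.normal(0, 20, 120)
df = pd.DataFrame({"week": weeks, "arrivals": arrivals})

# Lag features: the prior four weeks of arrivals predict the next week.
for lag in range(1, 5):
    df[f"lag_{lag}"] = df["arrivals"].shift(lag)
df = df.dropna()

features = [f"lag_{lag}" for lag in range(1, 5)]
split = int(len(df) * 0.8)                      # time-ordered split, no shuffling
train, test = df.iloc[:split], df.iloc[split:]

model = GradientBoostingRegressor().fit(train[features], train["arrivals"])
preds = model.predict(test[features])
print("MAE:", mean_absolute_error(test["arrivals"], preds))
```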
A company at the intersection of AI and Life Sciences is looking to expand their highly successful AI Research team. The company is based in the San Francisco Bay Area, but the role may be open to fully remote work. Libraries: PyTorch, TensorFlow, Scikit-learn, Pandas. Neural Networks: Graph Neural Networks, CNN. Protein Language Models: AlphaFold, etc.
Design and maintain fraud rules and scoring logic for transaction monitoring systems. Ensure alignment with regulatory expectations, model risk governance, and internal audit requirements. Explore and implement new technologies (e.g., graph analytics, behavioral biometrics, NLP) for advanced fraud detection. Strong command of Python, R, SQL, and data science libraries (pandas, scikit-learn, TensorFlow, etc.). Exposure to real-time fraud systems (e.g., SAS Fraud Management, Actimize, Falcon, etc.)
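To illustrate the rules-plus-scoring design this role describes, here is a toy sketch combining a deterministic rule layer with a model score; the thresholds, features, and tiny in-line frame are purely illustrative:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy transactions with historical fraud outcomes (illustrative only).
txns = pd.DataFrame({
    "amount": [25.0, 9800.0, 120.0, 15000.0],
    "is_foreign": [0, 1, 0, 1],
    "label": [0, 1, 0, 1],
})

features = ["amount", "is_foreign"]
model = RandomForestClassifier(random_state=0).fit(txns[features], txns["label"])

# Rule layer: deterministic, auditable conditions reviewed with model risk governance.
txns["rule_hit"] = (txns["amount"] > 10_000) & (txns["is_foreign"] == 1)
# Score layer: model-estimated probability of fraud.
txns["score"] = model.predict_proba(txns[features])[:, 1]
# Alert when either the rule fires or the score crosses a tuned threshold.
txns["alert"] = txns["rule_hit"] | (txns["score"] > 0.8)
print(txns[["amount", "rule_hit", "score", "alert"]])
```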
🚀 Publish, present, and patent high-impact findings in AI and machine learning. Conduct cutting-edge research in artificial intelligence, including areas such as natural language processing, computer vision, generative models, and reinforcement learning. Strong publication record in top-tier conferences (e.g., NeurIPS, ICML, ACL, CVPR, ICLR) is highly preferred. Expertise in machine learning, deep learning, or statistical modeling. Experience with model development using TensorFlow, PyTorch, JAX, or similar frameworks