Serve as the lead modeler for machine learning models and analytic products, from design through development, deployment, and integration into larger business processes. Proven experience turning ambiguous business problems and raw data into rigorous analytic solutions by applying critical thinking and advanced technical and statistical programming techniques. A deep understanding of the theory and application of a variety of statistical and machine learning methods and algorithms, including optimization under uncertainty, forecasting, time series analysis, and Bayesian methods. It is the policy of Citizens Bank to provide equal employment and advancement opportunities to all colleagues and applicants for employment without regard to race, color, ethnicity, religion, gender, pregnancy/childbirth, age, national origin, sexual orientation, gender identity or expression, disability or perceived disability, genetic information, citizenship, veteran or military status, marital or domestic partner status, or any other category protected by federal, state and/or local laws. Results of the background check are individually reviewed based upon legal requirements imposed by our regulators and with consideration of the nature and gravity of the background history and the job offered.
The ideal candidate will have a strong background in data engineering and cloud-based solutions, and a proven track record of building and managing scalable data pipelines on AWS. The Data Engineer will work closely with cross-functional teams to develop, maintain, and optimize data solutions that support critical business insights. 3-5 years of experience in data engineering with a focus on AWS cloud solutions. Hands-on experience with AWS services such as Glue, Lambda, S3, Redshift, and Athena. Experience with big data tools such as Apache Spark, Hadoop, or EMR. AWS certifications (e.g., AWS Certified Data Analytics, AWS Certified Solutions Architect).
For one of our ongoing projects, we are looking for multi-skilled Java developers with Python, Spark, and Hortonworks/Hadoop. Standing up cutting-edge analytical capabilities, leveraging automation, cognitive, and science-based techniques to manage data and models, and drive operational efficiency by offering continuous insights and improvements. Help with system integration, performance evaluation, application scalability, and resource refactoring, based on a thorough understanding of applicable technology and tools such as Python, Spark, Hadoop, AtScale, Dremio, and existing designs. 3+ years of Hadoop experience (Hortonworks preferred). Most importantly, we need resources who are computationally/quantitatively experienced.
We at SynergisticIT understand the mismatch between employers' requirements and employee skills, which is why, since 2010, we have helped thousands of candidates get jobs at technology clients such as Apple, Google, PayPal, Western Union, Client, Visa, and Walmart Labs, to name a few. We want Data Science/Machine Learning/Data Analyst and Java Full Stack candidates. Knowledge of statistics, Gen AI, LLMs, Python, computer vision, and data visualization tools. Preferred skills: NLP, text mining, Tableau, Power BI, Databricks, TensorFlow. Spring Boot, Microservices, Docker, Jenkins, GitHub, Kubernetes, and REST API experience.
At least 4 years of pre-sales experience, including responding to RFPs/RFIs, proactive client engagement, delivering client demos, and preparing presentations. Proficiency in Python, R, or Scala and ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn, Hugging Face, LangChain). Experience with big data technologies (e.g., Hadoop, Spark, Snowflake) and databases (SQL, NoSQL). Deep knowledge of financial services domains, including risk management, fraud detection, customer analytics, and regulatory compliance. Certifications in cloud platforms (e.g., AWS Certified Machine Learning, Azure AI Engineer) or data science (e.g., TensorFlow Developer Certificate).
🚀 Publish, present, and patent high-impact findings in AI and machine learning. Conduct cutting-edge research in artificial intelligence, including areas such as natural language processing, computer vision, generative models, and reinforcement learning. A strong publication record in top-tier conferences (e.g., NeurIPS, ICML, ACL, CVPR, ICLR) is highly preferred. Expertise in machine learning, deep learning, or statistical modeling. Experience with model development using TensorFlow, PyTorch, JAX, or similar frameworks.
A company at the intersection of AI and Life Sciences is looking to expand its highly successful AI Research team. The company is based in the San Francisco Bay Area, but the role may be open to fully remote work. Libraries: PyTorch, TensorFlow, Scikit-learn, Pandas. Neural networks: Graph Neural Networks, CNNs. Protein language models: AlphaFold, etc.
Design and maintain fraud rules and scoring logic for transaction monitoring systems. Ensure alignment with regulatory expectations, model risk governance, and internal audit requirements. Explore and implement new technologies (e.g., graph analytics, behavioral biometrics, NLP) for advanced fraud detection. Strong command of Python, R, SQL, and data science libraries (pandas, scikit-learn, TensorFlow, etc.). Exposure to real-time fraud systems (e.g., SAS Fraud Management, Actimize, Falcon, etc.).
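As a minimal sketch of the rule-and-scoring logic this posting describes (not the employer's actual system; every rule name, weight, and threshold below is an invented assumption), each rule inspects a transaction and contributes a weight toward an overall risk score:

```python
# Hypothetical rule-based fraud scoring sketch. Rules, weights, and the
# alert threshold are illustrative assumptions, not a real rule set.
RULES = [
    ("high_amount", lambda txn: txn["amount"] > 5_000, 40),
    ("foreign_country", lambda txn: txn["country"] != txn["home_country"], 25),
    ("rapid_repeat", lambda txn: txn["txns_last_hour"] >= 5, 35),
]

ALERT_THRESHOLD = 60  # scores at or above this raise an alert

def score_transaction(txn):
    """Return (risk_score, names_of_fired_rules) for one transaction."""
    fired = [(name, weight) for name, pred, weight in RULES if pred(txn)]
    return sum(w for _, w in fired), [name for name, _ in fired]

txn = {"amount": 7_200, "country": "FR", "home_country": "US", "txns_last_hour": 6}
score, fired = score_transaction(txn)
alert = score >= ALERT_THRESHOLD  # all three rules fire, so this alerts
```

In production such scoring typically runs inside a dedicated engine (e.g., the SAS Fraud Management or Actimize platforms named above) rather than hand-rolled Python, but the additive rule-weight structure is the same idea.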
Fully remote (onsite in Vinings, GA Mon-Thurs if converted to FTE). We are seeking a Senior BI Analyst with expertise in Python, SQL, Tableau, and predictive analytics. You will work with large datasets across SAP and Snowflake environments to deliver strategic insights, with a strong focus on forecasting container arrivals, optimizing inventory levels, and identifying cost-saving opportunities, all in support of a major ERP modernization and the nationwide consolidation of distribution centers. System Migration: Lead data analysis and reporting for a major migration from a legacy ERP to SAP Warehouse Management, with reporting transitioning to Snowflake. Python Predictive Analytics: Perform predictive modeling in Python to forecast container arrivals, inventory levels, and other supply chain KPIs.
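To illustrate the forecasting task in the simplest possible terms (a trailing moving average rather than whatever model the team actually uses; the numbers, dates, and window size are invented for the sketch):

```python
import pandas as pd

# Toy weekly container-arrival history; values and dates are made up.
history = pd.Series(
    [120, 135, 128, 140, 150, 145, 160, 155],
    index=pd.date_range("2024-01-07", periods=8, freq="W"),
    name="containers",
)

# Next-week point forecast: mean of the trailing 4 weeks.
window = 4
forecast = history.rolling(window).mean().iloc[-1]
```

A real pipeline would pull the history from Snowflake via SQL and likely use a proper time series model (seasonality, holidays, lead times), but the rolling baseline above is the usual starting point for judging whether a fancier model adds value.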
Become a part of our caring community and help us put health first. Master's degree and 3+ years of experience in research/ML engineering or an applied research scientist position, preferably with a focus on developing production-ready AI solutions. Experience with machine learning frameworks like Scikit-Learn, TensorFlow, or PyTorch. At minimum, a download speed of 25 Mbps and an upload speed of 10 Mbps are recommended; a wireless, wired cable, or DSL connection is suggested. Humana Inc. (NYSE: HUM) is committed to putting health first for our teammates, our customers, and our company.
The role will focus on data discovery, Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP) efforts. The Data Scientist I will perform impact assessments and data analysis, support prioritization and reporting of AI/ML/NLP projects, and engage with business and technical stakeholders. Bachelor's degree in Data Science, Engineering, Computer Science, Quantitative Theory and Methods, Statistics, Mathematics, Management Science, Economics, Econometrics, Operational Research, Mathematics joint Political Science, or another similar quantitative discipline. Natural Language Processing (NLP) experience building and implementing a project/model/application from scratch. Experience using Python packages such as pandas, NumPy, scikit-learn, spaCy, NLTK, PyTorch, TensorFlow, or other advanced scientific/ML/NLP Python packages.
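A "from scratch" NLP project of the kind this posting asks about often starts as a small supervised pipeline; a minimal sketch with the scikit-learn packages named above (the toy corpus and labels are invented purely for illustration):

```python
# Minimal text-classification pipeline: TF-IDF features + linear model.
# Corpus, labels, and the query sentence are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "refund my payment now",
    "great service thank you",
    "charge was wrong refund please",
    "love the new app update",
]
labels = ["complaint", "praise", "complaint", "praise"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

pred = model.predict(["please refund this charge"])[0]
```

The same skeleton scales from a weekend prototype to a production model by swapping the vectorizer for spaCy or transformer embeddings and the toy lists for a real labeled dataset.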
Model Validation: Evaluate model performance using appropriate metrics (e.g., AUC, RMSE) and ensure models meet regulatory and business requirements. Proficiency in Python or R for data analysis and model development (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost). Familiarity with data visualization tools (e.g., Tableau, Power BI, Matplotlib, Seaborn). Knowledge of cloud platforms (e.g., AWS, Azure, GCP) for model deployment and data processing is a plus. Preferred Qualifications: Experience with advanced machine learning techniques, such as deep learning, natural language processing (NLP), or ensemble methods, applied to insurance use cases.
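The two metrics the validation bullet names are one-liners in scikit-learn; a small sketch with invented toy predictions (the arrays below are illustrative, not real model output):

```python
# Computing the validation metrics named above: AUC for a classifier,
# RMSE for a regressor. All y-values here are invented toy data.
import numpy as np
from sklearn.metrics import roc_auc_score, mean_squared_error

# Classification: true labels vs. predicted probabilities of the positive class.
y_true_cls = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc_score(y_true_cls, y_score)  # 0.75: 3 of 4 pos/neg pairs ranked correctly

# Regression: true values vs. point predictions.
y_true_reg = np.array([3.0, -0.5, 2.0, 7.0])
y_pred_reg = np.array([2.5, 0.0, 2.0, 8.0])
rmse = np.sqrt(mean_squared_error(y_true_reg, y_pred_reg))
```

Taking the square root of `mean_squared_error` keeps the snippet portable across scikit-learn versions (newer releases also offer `root_mean_squared_error` directly).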
Experience with advanced LLM concepts like prompt compression, fine-tuning, caching, etc. This position is located in Bridgewater, NJ / Sunnyvale, CA / Austin, TX / Raleigh, NC / Richardson, TX / Tempe, AZ / Phoenix, AZ / Charlotte, NC / Houston, TX / Alpharetta, GA; candidates must be in one of these locations or willing to relocate. Preferred Data Scientist qualifications: 4 years of hands-on experience with more than one programming language: Python, R, Scala, Java, SQL. Proficiency in building RESTful APIs using FastAPI, Flask, or Django. Exposure to front-end/full-stack integration (React/Angular, TypeScript, REST APIs, GraphQL, event-driven, etc.).
Business Segment Overview: With boots-on-the-ground authenticity at the heart of everything we do, our comprehensive array of commercial products and services enables you to work directly with our clients across hedging, risk management, execution and clearing, OTC products, commodity finance, and more. Responsibilities/Position Purpose: We are seeking a highly motivated and talented Machine Learning/Deep Learning researcher to join our growing team on a part-time, temporary basis. In this role, you will work closely with the trading desk and play a critical part in developing and deploying cutting-edge machine learning and deep learning models in the world of commodities trading. Research, design, and develop sophisticated machine learning and deep learning models, including but not limited to CNN, LSTM, and transformer models. A self-sufficient modeler with hands-on experience in deep learning models; application in finance/time series modeling is a plus.
Applies data science techniques such as machine learning, statistical modeling, and artificial intelligence, working closely with senior team members. Minimum 12 years of advanced Java, R, SQL, and Python coding. Minimum 6 years of statistical analysis, machine learning, computer science, programming, and data storytelling. Minimum 6 years of big data technologies such as Spark, AWS, and Hadoop, including traditional RDBMSs such as Oracle and SQL Server. Specialized health and family planning benefits, including fertility benefits and cancer, diabetes, and musculoskeletal support programs.
Natural Language Processing (NLP) for unstructured customer feedback. Leverage AWS services (e.g., S3, Glue, Lambda, EMR, Athena, Redshift) for data storage, processing, and orchestration. Extensive experience with AWS services for data engineering and machine learning (e.g., S3, Glue, Lambda, EMR, Athena, Redshift, SageMaker). Strong hands-on experience with Snowflake for data warehousing, modeling, and performance optimization. Familiarity with other big data technologies (e.g., Apache Spark).
AI/ML/LLM engineer with the skills below (SE4): Python, Apache Spark (PySpark), Kubernetes, Django. Apache Kafka for real-time data streaming, and distributed computing frameworks. Multi-GPU training and distributed computing frameworks such as TensorFlow Distributed, PyTorch Distributed, and Horovod to accelerate AI/ML workloads. Configuring and managing NVIDIA GPUs and GCP accelerators (TPUs, GPU instances).
As a Director and Lead Machine Learning Engineer within the AIOps organization, you will play a significant role in building reliable, reproducible software applications and standardizing the AIOps pipeline on the cloud. Hands-on experience with cloud platforms such as Google Cloud, AWS, and Azure. Development experience with web service APIs on AWS, Google Cloud Platform, and Azure, and their suites of tools and offerings for AI/GenAI. Basic understanding of ML frameworks and tooling, i.e., TensorFlow, Anaconda, Scikit-Learn, SageMaker, Agentic AI, and Vertex AI. Familiarity with Python Flask or Spring Boot.
Implement data versioning, lineage, and metadata management. Cloud & Big Data Technologies: Design AI/ML data solutions on AWS, Azure, or GCP. Utilize big data technologies such as Apache Spark, Hadoop, Kafka, and Delta Lake. Expertise in big data technologies (Spark, Kafka, Hadoop, Delta Lake, etc.). Proficiency in cloud platforms (AWS, Azure, or GCP) and data services.