Assess clients' data strategy, systems, and datasets by exploring cloud and on-premises infrastructure and conducting meetings. Bachelor's degree in a Data & Analytics-related field such as Data Science or Data Engineering (or equivalent experience). Proficiency in data visualization tools such as Power BI or Tableau. Background in solution architecture, DevOps, or data engineering on platforms like Azure, AWS, or GCP. Experience with SQL databases and Spark-based platforms (Databricks, Fabric, Synapse).
Design and implement scalable data pipelines and architectures on Azure Databricks. Leverage Apache Spark, Delta Lake, and Azure-native services to build high-performance data solutions. Lead the migration of workloads from Azure SQL to Azure Databricks, ensuring a seamless transition. Design and implement scalable data pipelines to extract, transform, and load (ETL/ELT) data from Azure SQL into Databricks Delta Lake. Optimize Azure SQL queries and indexing strategies before migration to enhance performance in Databricks.
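The incremental ETL described above typically relies on upsert (MERGE) semantics so that re-running a batch does not duplicate rows in the target Delta table. A minimal pure-Python sketch of that merge logic follows; the `upsert` function, the sample batches, and the `id` key are all hypothetical illustrations standing in for a Delta Lake `MERGE` in a real Databricks pipeline:

```python
from datetime import datetime, timezone

def upsert(target: dict, rows: list, key: str = "id") -> dict:
    """Merge incoming rows into the target table by primary key,
    mimicking the MERGE (upsert) semantics used when incrementally
    loading Azure SQL extracts into a Delta table (illustrative only)."""
    for row in rows:
        # Enrich each row with a load timestamp, then insert new keys
        # or overwrite changed ones.
        enriched = {**row, "_loaded_at": datetime.now(timezone.utc).isoformat()}
        target[row[key]] = enriched
    return target

# Simulated extract: two batches from the source system; id 2 is
# updated in the second batch and must not be duplicated.
batch1 = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.5}]
batch2 = [{"id": 2, "amount": 7.0}, {"id": 3, "amount": 3.2}]

table = {}
upsert(table, batch1)
upsert(table, batch2)
print(sorted(table))       # [1, 2, 3]
print(table[2]["amount"])  # 7.0
```

Because the merge is keyed, replaying a batch is idempotent, which is the property that makes the Azure SQL-to-Delta-Lake cutover safe to retry.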
This role is not open to visa sponsorship or transfer of visa sponsorship, including for those on H-1B, F-1, OPT, STEM OPT, or TN visas, nor is it available on a corp-to-corp basis. This role follows a hybrid schedule based in our Fort Mill, SC headquarters or NYC office (Tuesday through Thursday), with fully remote work on Mondays and Fridays. The AI Products team is looking for a Data Scientist to help build the next generation of generative AI solutions for our Home Services partners in the utilities and telecommunications sectors. This is a hands-on role focused on developing and scaling LLM-powered product features using modern frameworks and deployment practices. Experience working in cloud environments (AWS preferred), as well as with data and machine learning platforms such as Databricks, MLflow, or similar tools.
Experience building enterprise systems, especially using Databricks, Snowflake, and platforms like Azure, AWS, or GCP. Familiarity with cloud data platforms (e.g., Azure, Databricks, Snowflake) would be a plus. Experience working with Snowflake and/or Microsoft Fabric. Extensive experience working with Databricks and Azure Data Factory for data lake and data warehouse solutions. Hands-on experience with big data technologies (such as Hadoop, Spark, Kafka).
We are seeking a highly skilled and motivated AI/ML Engineer / Python Developer with proven experience in Large Language Models (LLMs), GPU-based computing, and cloud-native architecture on GCP. Architect and maintain cloud-native applications on Google Cloud Platform (GCP), including use of TPUs and GPU instances. Build and scale data pipelines with Apache Kafka for real-time data streaming, and use Apache Spark (PySpark) for distributed data processing. Deep expertise in GPU-accelerated training, with proficiency in TensorFlow Distributed, PyTorch Distributed, and Horovod. Proficiency in Apache Kafka, Apache Spark (PySpark), and Kubernetes. Demonstrated experience with GCP services, particularly TPUs and GPU-enabled compute instances. Experience in building and deploying scalable cloud-native architectures and microservices. Contributions to open-source LLM projects or experience training LLMs from scratch.
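The Kafka-plus-Spark streaming work described above usually comes down to grouping a stream of events into fixed time windows and aggregating per key. A small pure-Python sketch of that tumbling-window aggregation follows; the function name, the event tuples, and the 60-second window are hypothetical, standing in for what a Spark Structured Streaming job would compute over a Kafka topic:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed, non-overlapping time
    windows and count occurrences per key - the same shape of aggregation
    a Structured Streaming job runs over a Kafka stream (illustrative)."""
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event's timestamp to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Timestamps in seconds; events at t=5 and t=30 share the [0, 60) window.
events = [(5, "click"), (30, "click"), (61, "view"), (65, "click"), (130, "view")]
result = tumbling_window_counts(events)
print(result)
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1, (120, 'view'): 1}
```

In a production pipeline the same grouping would be expressed with Spark's event-time window functions, with Kafka supplying the event stream.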
Lighthouse Technology Services is partnering with our client to fill their Senior MDM Python (AWS) Developer position! We're currently a Python and Angular/TypeScript tech stack team and use a range of AWS services and data technologies such as S3, PostgreSQL, DynamoDB, Athena, Snowflake, Lambda, and Glue. AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect) are highly desirable. Experience with modern data stack technologies (e.g., dbt, Snowflake, Databricks). Background in DevOps practices and Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation.
Work with AI/ML model APIs and platforms such as OpenAI, Hugging Face, or Google Vertex AI. Solid hands-on experience with modern front-end frameworks such as React, Angular, or Vue.js. Strong back-end development expertise using Node.js, Flask/Django, or Spring. Familiarity with cloud platforms, especially AWS (e.g., Lambda, S3, SageMaker). Experience with Docker and Kubernetes for containerization and orchestration.
Minimum of 3 years of experience with Azure (Azure Data Factory, Azure Synapse, Azure SQL services). Expert-level understanding of Azure Data Factory, Azure Synapse, and Azure SQL services. Designing and building data pipelines using API, ingestion, and streaming methods. ManpowerGroup® (NYSE: MAN), the leading global workforce solutions company, helps organizations transform in a fast-changing world of work by sourcing, assessing, developing, and managing the talent that enables them to win. We are recognized consistently for our diversity, as a best place to work for Women, Inclusion, Equality, and Disability, and in 2022 ManpowerGroup was named one of the World's Most Ethical Companies for the 13th year, all confirming our position as the brand of choice for in-demand talent.
Fully remote (onsite in Vinings, GA from Monday through Thursday if converted to FTE). We are seeking a Senior BI Analyst with expertise in Python, SQL, Tableau, and predictive analytics. You will work with large datasets across SAP and Snowflake environments to deliver strategic insights, with a strong focus on forecasting container arrivals, optimizing inventory levels, and identifying cost-saving opportunities, all in support of a major ERP modernization and the nationwide consolidation of distribution centers. System Migration: Lead data analysis and reporting for a major migration from a legacy ERP to SAP Warehouse Management, with reporting transitioning to Snowflake. Python Predictive Analytics: Perform predictive modeling in Python to forecast container arrivals, inventory levels, and other supply chain KPIs.
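Forecasting container arrivals, as described above, can in its simplest form be an ordinary least-squares trend fit extrapolated forward. The sketch below is a minimal, hypothetical illustration in plain Python (the function names and the weekly arrival figures are invented; a real model would use a proper forecasting library and richer features):

```python
def fit_linear_trend(y):
    """Ordinary least-squares fit of y = a + b*t for t = 0..n-1,
    returning intercept a and slope b."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return a, b

def forecast(y, steps):
    """Extrapolate the fitted trend `steps` periods past the sample."""
    a, b = fit_linear_trend(y)
    n = len(y)
    return [a + b * (n + h) for h in range(steps)]

arrivals = [100, 110, 120, 130]  # hypothetical containers per week
print(forecast(arrivals, 2))     # [140.0, 150.0]
```

The same idea extends to inventory levels and other supply chain KPIs; in practice one would add seasonality and holdout validation on top of the bare trend.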
Managing development teams in building AI and GenAI solutions, including but not limited to analytical modeling, prompt engineering, general-purpose programming (e.g., Python), testing, communication of results, front-end and back-end integration, and iterative development with clients. Experience with common LLM development frameworks (e.g., LangChain, Semantic Kernel), relational storage (SQL), and non-relational storage (NoSQL). Experience in analytical techniques such as Machine Learning, Deep Learning, and Optimization. Understanding of or hands-on experience with Azure, AWS, and/or Google Cloud platforms. For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, the San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for employment in accordance with these laws.
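The prompt engineering mentioned above, which frameworks like LangChain and Semantic Kernel help structure, often amounts to assembling retrieved context and a user question into a single grounded prompt. A minimal, hypothetical sketch of that assembly in plain Python (the function name, prompt wording, and sample context are illustrative, not any framework's actual API):

```python
def build_prompt(question, context_chunks):
    """Assemble a grounded prompt from retrieved context chunks and a
    user question - the pattern LLM frameworks wrap in prompt templates
    (illustrative only; not a real framework API)."""
    context = "\n".join(f"- {chunk}" for chunk in context_chunks)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt("What is the SLA?", ["SLA is 99.9% uptime."])
print(prompt)
```

Keeping prompt assembly in a single tested function like this makes iterative development with clients easier, since wording changes are reviewable in one place.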
Within GRM, we have established the Data Strategy & Management (DSM) function. Under the GRM DSM Executive's leadership, the Data Strategy Solutions Architect will evaluate the existing GRM data architecture and design solutions to simplify and optimize the data environment. Leads strategy and architecture for one or more data management products such as Data Catalog, Data Lineage, Data Feeds Registry, Data Quality, and related data services. Ability to drive data strategy and a deep understanding of industry paradigms such as data mesh, data contracts, and integration fabric. Hands-on expertise with data technologies and computing frameworks including, but not limited to, Python, Spark, Airflow, JavaScript, and SQL.