Microservices & Cloud Architecture: Design and maintain microservices on AWS Lambda, Docker containers, and Spring Cloud components (Config, Eureka, Gateway). Data Engineering & Processing: Build and optimize ETL pipelines in Databricks and AWS Glue Jobs. Utilize Apache Commons libraries for performant data transformations. 6+ years of software development, with at least 4 years focused on Java (Spring Boot, Spring MVC) in microservices environments. Solid experience with Databricks or equivalent Spark-based platforms for big data processing. These include Management Consulting, Cloud Architecture, Software Development, Agile Implementation, Data Science and Analytics, Systems Engineering, Augmented Reality, and Cyber Security services.
Strategize: Think widely about potential applications of machine learning and natural language processing techniques, translating capabilities built for individual clients into new features or customization options for our core products. Must hold an active Top Secret (TS) security clearance; TS/SCI strongly preferred. Familiarity with machine learning workflows and core data science concepts (e.g., classification vs. regression). We are a Series D funded company with investors including Addition, USIT, Lux Capital, Amplify Partners, Addition Capital, Bloomberg Beta, and others. This includes full medical, dental, and vision coverage, fertility benefits through Carrot, mental health coverage on demand with Headspace Care+, Gympass+ membership via Wellhub, One Medical membership, 401(k), remote work stipends, and a monthly internet allowance.
If you're ready to take your pre-sales career to the next level, the Dataiku Sales Engineering team would like to hear from you. Qualify deals through collaboration with the Account Executive (AE), the Business Development Representative (BDR), and sales management. Conduct discovery meetings to learn from the customer and the BDR about the customer's business requirements and technical environment. Assist in sales pipeline building activities, including attendance at live and/or virtual trade shows and industry conferences, working with marketing and/or partners on campaign design and execution, and other activities specified by sales and pre-sales management. Familiarity with data storage and computing infrastructure for data of all sizes (SQL, NoSQL, Kubernetes, Spark, etc.)
Our patented technology is based on over 11 years of research at the California Institute of Technology and NASA Jet Propulsion Laboratory. Leverage integrations with big data frameworks (e.g., Databricks) as needed to develop solutions for customers. 3+ years of experience with the Python data stack (pandas, NumPy, scikit-learn, TensorFlow, PyTorch, matplotlib, etc.) and web-app development stacks (e.g., Flask/Django). Our benefits include highly competitive pay, equity, fully paid health, vision, and dental insurance for you and your dependents, and unlimited PTO.
Knowledge Management, Inc. (KMI) has the leadership and experience to deliver innovative technology, logistics, and management solutions to meet real mission requirements. KMI is a Minority Business Enterprise (MBE) and Small Disadvantaged Business (SDB) that specializes in Logistics, Warehouse Services, Distance Learning/Training, Enterprise Solutions, Financial Management Support, Program Management, Intelligence Analysis & Threat Assessment, and Data Analytics/Operations Research. Polygraph type: Counterintelligence. Machine learning models and artificial intelligence techniques. Databricks and Apache Spark certifications (preferred)
Founded in 1961 to help the Department of Defense resolve complex logistics management challenges, LMI continues to enable growth and transformation, enhance operational readiness and resiliency, and ensure mission success for federal civilian and defense agencies. Responsible for providing advanced data analytics and predictive strategic workforce planning via on-demand, intuitive web-based capabilities and reporting. Experience developing dashboards using Tableau, Qlik, Power BI, RShiny, plotly, or d3. Experience with data science methods related to data architecture, data munging, data and feature engineering, and predictive analytics. Previous experience with people analytics
Develops computational algorithms and statistical methods to find patterns and relationships in multiple database sources. Identifies and utilizes predictive modeling and machine learning techniques to improve operational processes and the agency's overall data quality. Knowledge of MS SQL Server 2008 or higher, Oracle, or IBM DB2, involving implementation of schemas, indexes, and query optimization.
Have the ability to outline the architecture of Spark, Scala, and Cloudera environments. Guide customers on architecting and building data engineering pipelines on Snowflake, including streaming data ingestion and processing. Weigh in on and develop frameworks for distributed computing, Apache Spark, PySpark, Python, HBase, Kafka, REST-based APIs, and machine learning as part of our tools development (SnowConvert) and overall modernization processes. Experience in Data Warehousing, Business Intelligence, application modernization, lakehouse implementations, or Cloud projects, including building real-time and batch data pipelines using Apache Spark and Apache NiFi. Experience with implementing Apache Iceberg or other open table format (Delta, Hudi) based lakehouse solutions.
NOTE: The client is not looking for a Data Architect; they are primarily looking for a Senior Data Modeler with strong SQL and Python experience. Data Platforms - Hadoop, Spark, Oracle, Exadata, SQL Server, DB2, MongoDB, Teradata, Splunk. Data Transformation - Informatica, SSIS, Alteryx, Trifacta. BI - Tableau, MicroStrategy, metadata/lineage tools, distributed caching tools. Familiar with the Data Governance process - identifying critical data elements, metadata, data lineage, data dictionaries, data models, Data Sharing Agreements, Service Level Agreements, and Data Management Standards.
Join the T-Mobile Fiber team as a Data Scientist. We're looking for a curious, analytical, and results-driven Data Scientist to join our T-Mobile Fiber team. You won't just crunch numbers; you'll shape the future of T-Mobile Fiber. At T-Mobile, employees in regular, non-temporary roles are eligible for an annual bonus or periodic sales incentive or bonus, based on their role. As part of the T-Mobile team, you know the Un-carrier doesn't have a corporate ladder; it's more like a jungle gym of possibilities!
Embracing increased ambiguity, you are comfortable when the path forward isn't clear; you ask questions, and you use these moments as opportunities to grow. As part of the Data Science and Machine Learning Engineering team, you will design and develop AI/ML systems that transform client operations. As a Senior Associate, you will lead projects from conception to production, mentoring others while engaging clients to align technology with business objectives. Working with AI frameworks like PyTorch and TensorFlow. For only those qualified applicants that are impacted by the Los Angeles County Fair Chance Ordinance for Employers, the Los Angeles Fair Chance Initiative for Hiring Ordinance, the San Francisco Fair Chance Ordinance, the San Diego County Fair Chance Ordinance, and the California Fair Chance Act, where applicable, arrest or conviction records will be considered for employment in accordance with these laws.
You'll start out by running and maintaining pre-built notebooks, performing ad hoc data investigations, interpreting results, and flagging anomalies. Master's degree required in a quantitative field: epidemiology, psychology, health administration, public health, computer science, statistics, data science, economics, mathematics, engineering, or a related field. Familiarity with programming notebooks (e.g., Jupyter, Databricks) and tools like Pandas, NumPy, or similar. Experience writing SQL to query relational databases (e.g., Athena, PostgreSQL, MS Access).
We are using models early to fail less often, executing clinical trials to add valuation to the company, and generating fit-for-purpose data to feed back into Valo's Opal Computational Platform as we reinvent drug discovery and development from the ground up. Proficient in Python, SQL, and modern cloud platforms (GCP, AWS, etc.). Valo Health, LLC ("Valo") is a technology company built to transform the drug discovery and development process using human-centric data and artificial intelligence-driven computation. As a digitally native company, Valo aims to fully integrate human-centric data across the entire drug development life cycle into a single unified architecture, thereby accelerating the discovery and development of life-changing drugs while simultaneously reducing costs, time, and failure rates. Founded by Flagship Pioneering, Valo is headquartered in Lexington, MA, with tissue engineering research based in New York, NY.
This role specifically supports data science efforts in the Pharma Supply Chain (PSS) Transportation, DC Operations, and Inventory Management areas. This role will focus on leveraging advanced analytics, machine learning, and optimization techniques to solve complex challenges related to operations management, inventory optimization, logistics efficiency, and risk mitigation. 4-8 years of experience in Data Science, MLOps, or Data Analytics preferred. Bachelor's degree in Data Science, Computer Science, Engineering, Operations Research, or a related field preferred. Strong programming skills in Python, R, and SQL
Our capabilities range from C5ISR, AI and Big Data, cyber operations, and synthetic training environments to fleet sustainment, environmental remediation, and the largest family of unmanned underwater vehicles in every class. Huntington Ingalls Industries (HII) Mission Technologies Warfare Systems partners with the DoD and the defense innovation ecosystem to rapidly acquire and field critical and emerging technologies, particularly integrated communications, networking, and Systems-of-Systems (SoS) technologies, to enhance national security and warfighter capabilities. Through a multiagency contracting approach, the Collaborative Operations for Battlespace Resilient Architecture (COBRA) initiative focuses on advancing these technologies to achieve multi-domain battlespace integration and resilient command and control. Conducts long-term data and trend analysis to identify systemic vulnerabilities and high-priority threats, recommends strategies to mitigate threats and challenges to the supply chain, and provides best practices for implementing Artificial Intelligence (AI) and Machine Learning (ML). 5 years relevant experience with a Bachelor's in a related field; 3 years relevant experience with a Master's in a related field; 0 years experience with a PhD or Juris Doctorate in a related field; or a High School Diploma or equivalent and 9 years relevant experience.
Possess a strong foundation in machine learning (ML), natural language processing (NLP), and data science, and develop expertise with Generative AI. Balance broad AI and ML expertise with hands-on GenAI experimentation and solution-building, helping accelerate AI adoption across the enterprise. Experience with data visualization or dashboarding tools, including Tableau or Power BI. Experience with distributed data and computing tools, including Databricks using Spark. Bachelor's degree in Data Science, Mathematics, CS, Physics, Statistics, or Quantitative Social Sciences. As part of the application process, you are expected to be on camera during interviews and assessments.
Steward data security throughout all aspects of our projects through procedures and best practices in accordance with our data governance policies and legal and ethical requirements. The most relevant degrees include Statistics, Biostatistics, Data Science, Computer Science, Epidemiology, and Mathematics. Data engineering and software engineering skills such as automation, data pipeline orchestration, data modeling, and use of Databricks and related tools. Mastery of core statistical concepts as well as knowledge of quality measure risk adjustment, reliability, validity, and machine learning.
We are seeking a top-tier Data Scientist with TS/SCI clearance who is passionate about conducting advanced data science research and analytics. Join us in supporting a significant program aimed at enhancing mission-critical big data and predictive analytics capabilities. B.S. degree in a quantitative field such as Computer Science, Mathematics, Economics, Statistics, Engineering, Physics, or Computational Social Science; or a master's degree with equivalent advanced training. Experience deploying data science applications (e.g., Streamlit) using containerization technologies like Docker or Kubernetes. Knowledge in social network analysis, natural language processing, and image processing.
Business casual dress, flexible work schedules, remote and hybrid work opportunities, and tuition reimbursement are a few of our many work-life benefits available to our employees.
We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. 5+ years of experience hosting and deploying GenAI/ML solutions (e.g., for data pre-processing, training, deep learning, fine-tuning, and inference) and/or data science experience. Ideally, the candidate has AWS experience with proficiency in a wide range of AWS services (e.g., SageMaker, Bedrock, EMR, S3, OpenSearch Service, Step Functions, Lambda, and EC2). Hands-on experience with deep learning (e.g., CNN, RNN, LSTM, Transformer), machine learning, CV, GNN, or distributed training. Experience with Data Analytics, Data Engineering, Coding, Automation, and Scripting (e.g., Terraform, Python)