Solventum is seeking a highly motivated Senior Cloud Data Engineer to help shape the future of global health informatics. Develop, build, and maintain AWS data platform services such as S3, RDS, Aurora, Redshift, PostgreSQL, and DynamoDB using a cloud deployment pipeline. Manage healthcare data and maintain HIPAA, SOC 2, FedRAMP, and StateRAMP data security controls. Deep experience with AWS S3, AWS RDS, Aurora PostgreSQL, and MSSQL. Experience with stream processing tools such as Amazon Kinesis, Apache Spark, Storm, or Kafka.
SAIC is seeking a highly motivated Data Scientist to join our team of professionals providing technical and program services to propel the organization forward at a location near Annapolis Junction, MD. Design and execute advanced data science solutions for technical evaluation and decision support. Establish and maintain sophisticated knowledge and portfolio management systems to contribute to complex system integration and architectural design processes. Experience with SIGINT, CNO, or cybersecurity operations. Knowledge of strategic planning for increasing the scope and sustainability of machine learning.
Develop analytics using Java MapReduce and Python in the Linux (Red Hat version 7+) environment. US Citizens only; TS/SCI with polygraph required. Bachelor's degree plus eight (8) years of relevant experience, or Master's degree plus six (6) years of relevant experience. Experience with serialization such as JSON and/or BSON; developing RESTful services; and using source code management tools. Atlassian Jira & Confluence; SIGINT analysis experience; experience with GitLab & Maven; Kanban/Agile process experience. Compensation range: $125,786.98 - $238,638.89. Actual compensation depends on level of position, complexity of job responsibilities, geographic location, candidate’s scope of relevant work experience, educational background, certifications, contract-specific affordability, organizational requirements, and alignment with local market data.
Formed in 2010, CyberTrend has long-standing partnerships and clients, including partners like AWS and IBM, government agencies, the intelligence community, and defense contractors. The Software Engineer develops, maintains, and enhances complex and diverse software systems (e.g., processing-intensive analytics, novel algorithm development, manipulation of extremely large data sets, real-time systems, and business management information systems) based upon documented requirements. Shall have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc. Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS). Shall have demonstrated work experience with serialization such as JSON and/or BSON. CyberTrend contributes 10% of your monthly gross salary to a SEP-IRA account.
Syntes AI, Inc. is the creator of the Syntes AI software platform—a cloud-based solution for Multi-Domain Data Management and Data Analytics using Artificial Intelligence and Machine Learning technologies. Bachelor’s or Master’s degree in Applied Mathematics, Computer Science, Software Engineering, or a related field. Experience with front-end frameworks such as Vue, React, Angular, jQuery, and AJAX. Knowledge of developing web applications using ecommerce store and website development platforms such as Shopify (Liquid), Symfony, and WordPress. Knowledge of containerization and orchestration technologies like Docker and Kubernetes.
CTC Group is seeking Data Scientists, levels 1-2, for a contingent program to develop machine learning, data mining, statistical, and graph-based algorithms to analyze and make sense of datasets. Develop and train machine learning systems based on statistical analysis of data characteristics to support mission automation. Active TS/SCI with polygraph security clearance required. Level 1: two (2) years of relevant experience programming with data analysis software such as R, Python, SAS, or MATLAB. Level 2: five (5) years of relevant experience programming with data analysis software such as R, Python, SAS, or MATLAB.
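The graph-based analysis this role describes can be illustrated with a minimal sketch: finding connected components in an edge list via breadth-first search. The edge list and entity names below are hypothetical, and a real mission dataset would be far larger; this only shows the pattern, using the Python standard library.

```python
from collections import defaultdict, deque

# Hypothetical edge list, e.g. observed links between entities in a dataset.
EDGES = [("a", "b"), ("b", "c"), ("d", "e")]

def connected_components(edges):
    """Group nodes into connected components using BFS over an adjacency map."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), set()
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp.add(node)
            for nxt in adj[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        components.append(comp)
    return components

comps = connected_components(EDGES)
print(sorted(sorted(c) for c in comps))  # [['a', 'b', 'c'], ['d', 'e']]
```

Clustering entities into components like this is often the first step before statistical analysis of each group.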
Database design using the optimal schema for the requirements: star, snowflake, relational, hierarchical, network, or flat. Strong experience in machine learning, predictive analytics, data modeling, business intelligence tools, data analytics, data collection, and generating reports. AWS certification a plus (any of Data Engineer, DevOps Engineer, or Solutions Architect). Experience with PostgreSQL and NoSQL databases such as MongoDB; Redshift, data warehouses, data lakes, data wrangling, and data pipelines. Experience with data pipeline and workflow management tools such as Airflow.
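Of the schema styles named above, the star schema is the most common in warehousing: one central fact table referencing denormalized dimension tables. A minimal sketch, using SQLite from the Python standard library; all table and column names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Star schema: a central fact table with foreign keys into dimension tables.
cur.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    quantity   INTEGER,
    revenue    REAL
);
""")

cur.execute("INSERT INTO dim_date VALUES (1, '01', 'Jan', 2024)")
cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
cur.execute("INSERT INTO fact_sales VALUES (1, 1, 1, 3, 29.97)")

# A typical analytic query joins the fact table to a dimension and aggregates.
cur.execute("""
SELECT p.category, SUM(f.revenue)
FROM fact_sales f JOIN dim_product p ON f.product_id = p.product_id
GROUP BY p.category
""")
rows = cur.fetchall()
print(rows)  # [('Hardware', 29.97)]
```

A snowflake schema would further normalize the dimension tables (e.g. splitting `category` into its own table); the fact table stays the same.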
About the Opportunity: In this exciting Postdoctoral Fellow position, you will work to develop a cutting-edge AI-informed bioengineered model for predicting human responses to genetic (e.g., mRNA) nanomedicines/vaccines. This project is supported by the Vaccine and Regenerative Medicine Drug Product Team within the Biopharmaceutical Development Organization. Mentoring and team support: You will be mentored and guided by Beata Chertok, a former academic professor (University of Michigan) and current Associate Principal Scientist/Team Lead in the Delivery of Intracellular Biologics at AstraZeneca, an inspiring leader in mRNA nanomedicines and a scientific authority in nanoengineering, targeted gene delivery, and experimental modeling of nanomaterials behavior within biological systems. In addition, you will receive academic co-mentorship from John Tsang, Professor of Immunobiology and Biomedical Engineering at Yale University, an extraordinary scientific authority in computational modeling of the immune system and a visionary leader in Human Systems Immunology. About the role: You will use your biomaterials/tissue engineering expertise to develop a novel micro-physiological model that implements a unique biomimetic path for rapid benchmarking of human responses to genetic vaccines and therapeutics. Validated expertise in in vitro 2D and 3D cell culture/cell reprogramming, including working with distinct primary cell types, and proficiency in bio-analysis techniques such as flow cytometry, confocal microscopy, PCR, and multiplex ELISA. Proven experience in developing and analysing biomaterial-based scaffolds for tissue engineering.
Use Atlassian tools, including Jira Software, Jira Service Management, and Confluence, to document, manage, resolve, and escalate production and test environment bugs and enhancements, providing root cause analysis and issue replication. Monitor and troubleshoot failures with Google Kubernetes Engine (GKE)/Azure Kubernetes Service pods and other components. Build and respond to alerts and outage notifications using Splunk, Grafana, Hystrix, Rigor, New Relic, Prometheus, SCOM, Opsgenie, and other tools. Learn complex, custom-developed applications unique to intrastate/interstate alcohol sales, ensuring state-specific compliance requirements are met and maintained. Messaging systems: Kafka, Apache Spark, RabbitMQ, and Azure Service Bus.
Lead the design, development, and deployment of PySpark-based big data solutions. Architect and optimize ETL pipelines for structured and unstructured data. Implement best practices in data engineering (CI/CD, version control, unit testing). Work with cloud platforms like AWS. Ensure data security, governance, and compliance. Skills: Amazon Web Services (AWS) Cloud Computing, PySpark.
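The ETL pipelines described here follow a standard extract/transform/load pattern. A minimal pure-Python sketch of that pattern follows; all function and field names are hypothetical, and a production pipeline would express the same stages as PySpark DataFrame operations over distributed data rather than in-memory lists.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw source records into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types, normalize fields, drop malformed rows."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
        out.append({"user": r["user"].strip().lower(), "amount": amount})
    return out

def load(rows: list[dict], sink: list) -> None:
    """Load: write cleaned records to the target store (a plain list here)."""
    sink.extend(rows)

sink: list = []
raw = "user,amount\nAlice,10.5\nBob,not_a_number\nCarol,3\n"
load(transform(extract(raw)), sink)
print(sink)  # [{'user': 'alice', 'amount': 10.5}, {'user': 'carol', 'amount': 3.0}]
```

Keeping each stage a pure function, as above, is what makes the unit testing and CI/CD practices mentioned in the posting practical.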
Position: Data Science. Sr. Technical Recruiter | Syntricate Technologies Inc. | Minority Business Enterprise (MBE) Certified | E-Verified Corporation | Equal Employment Opportunity (EEO) Employer.
Shall have demonstrated a willingness to learn new technologies and leverage senior-level resources to expand the current technical foundation within the team structure. Shall have two (2) years of experience writing software scripts using scripting languages, including Bash, Perl, or Python. Shall have demonstrated experience with configuration management tools, including Puppet and Salt. Shall have experience diagnosing and troubleshooting large-scale cloud computing systems, including familiarity with distributed systems, e.g., Hadoop, Cassandra, Scality, Swift, Gluster, Lustre, GPFS, Amazon S3, or another comparable technology for big data management or high-performance computing. Familiarity with software load balancers for large-scale web service implementations, including HAProxy and NGINX; experience with Kubernetes orchestration services and Docker images.
Whether you're looking to write mobile app code, engineer the servers behind our massive ad tech stacks, or develop algorithms to help us process trillions of data points a day, what you do here will have a huge impact on our business, and the world. Technical Innovation: Pioneer new consumer data applications by developing POCs that advance our data registration, discovery, and segmentation capabilities while strengthening regulatory compliance and enhancing the overall user experience. 2+ years hands-on experience with the Google Cloud Platform ecosystem (BigQuery, Dataproc, Composer, Dataflow, BigTable) or AWS equivalent. 2+ years working with Hadoop technologies and distributed computing frameworks (Spark, Kafka, Hive, HBase). Fluency with at least one object-oriented programming language from Java, Python, or Scala is highly desirable, as these skills are critical for developing robust applications and managing data workflows effectively.
Lead Software Engineer - Azure Machine Learning. In this fully remote role, you'll collaborate closely with data scientists, analysts, and business stakeholders to build scalable, cloud-native applications leveraging Microsoft Azure's machine learning and data platforms. This is an exciting opportunity for a hands-on technical leader with a strong background in software architecture, machine learning lifecycle management, and cloud-native development using Microsoft Azure. 5+ years working with optimization and forecasting models and the full ML lifecycle using Microsoft Azure (Azure ML, Azure Databricks, Azure DevOps). Strong hands-on experience building microservices-based applications using C# and .NET.
Programming skills in Python (Django, DRF, FastAPI) or React/TypeScript with Material UI, Git, SQL, Playwright/Cypress test libraries, testing, and debugging. Experience with data visualization, RESTful APIs, RESTful web services, and orchestration and containerization (e.g., Kubernetes, Docker). Experience with Golang, Kotlin/Java, and/or Python. Must be local or willing to relocate to the DMV metropolitan area. Experience with asynchronous messaging systems (RabbitMQ, Apache Kafka, etc.).
The Software Engineer shall be responsible for the design, development, and deployment of a Retrieval-Augmented Generation (RAG) solution deployed in an HPC Linux environment. Candidates applying for this position must be familiar with LLMs (Large Language Models), LLM orchestration frameworks, knowledge retrieval, and security-aware AI systems. Experience with SQL, Elasticsearch, and vector databases. Knowledge of AI inferencing solutions such as NVIDIA NIM/Triton, vLLM, and direct deployment (e.g., Ray). Experience with the Atlassian suite of tools, including Confluence and Jira.
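The retrieval half of the RAG workflow this posting describes can be sketched with a toy in-memory vector store: embed the query, rank stored documents by cosine similarity, and splice the best match into the LLM prompt. The documents and embeddings below are hypothetical, and a real system would use an embedding model plus a vector database (and an actual LLM call) rather than hand-written three-dimensional vectors.

```python
import math

# Toy "vector database": document text paired with a precomputed embedding.
DOCS = [
    ("Kubernetes schedules containers across a cluster.", [0.9, 0.1, 0.0]),
    ("Elasticsearch provides full-text search.",          [0.1, 0.9, 0.1]),
    ("Triton serves machine-learning models.",            [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_embedding, k=1):
    """Rank documents by similarity to the query and return the top k texts."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_embedding, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding that lies closest to the model-serving document.
context = retrieve([0.05, 0.1, 0.95])
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: What serves ML models?"
print(context[0])  # Triton serves machine-learning models.
```

The "augmentation" step is simply the prompt construction at the end: the generator only sees retrieved context, which is also where the security-aware filtering mentioned in the posting would be applied.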
You will identify appropriate algorithms and approaches for solving critical communications problems using advanced signal processing, data processing, and machine learning techniques. Have the ability to travel within the continental United States; travel to sponsor sites local to APL may also be required. APL’s campus is located in the Baltimore-Washington metro area. The referenced pay range is based on JHU APL’s good faith belief at the time of posting. APL provides eligible staff with a comprehensive benefits package including retirement plans, paid time off, medical, dental, vision, life insurance, short-term disability, long-term disability, flexible spending accounts, education assistance, and training and development.
The Lead Big Data Engineer, working independently, will develop, test, debug, and document software components commensurate with their experience, as well as direct development staff in support of a software engineering effort. Build efficient and reliable ETL processes using Apache Spark and cloud-native tools on AWS. FINRA offers immediate participation and vesting in a 401(k) plan with company match and eligibility for participation in an additional FINRA-funded retirement contribution, tuition reimbursement, commuter benefits, and other benefits that support employee wellness, such as adoption assistance, backup family care, surrogacy benefits, employee assistance, and wellness programs. Other paid leave includes military leave, jury duty leave, bereavement leave, voting and election official leave for federal, state, or local primary and general elections, care of a family member leave (available after 90 days of employment), and childbirth and parental leave (available after 90 days of employment). FINRA employees are required to disclose to FINRA all brokerage accounts that they maintain, and those in which they control trading or have a financial interest (including any trust account of which they are a trustee or beneficiary and all accounts of a spouse, domestic partner, or minor child who lives with the employee), and to authorize their broker-dealers to provide FINRA with duplicate statements for all of those accounts.
These insights influence business strategy, inform channel design, and predict client behaviour. Integrate analytics into operational processes to improve efficiency and client experience. Ensure compliance with statutory, legislative, policy, and governance requirements. 5-8 years' experience in a Data Science role, with 1-2 years in team management. Honours Degree in Mathematics, Engineering, or Actuarial Science.
Enlighten, honored as a Top Workplace by USA Today, is a leader in big data solution development and deployment, with expertise in cloud-based services, software and systems engineering, cyber capabilities, and data science. Security clearance: a current Secret U.S. Government security clearance is required, with the ability to obtain TS/SCI-level clearance; U.S. citizenship required. Experience with configuration management tools (e.g., Git, Nexus, Maven). Experience with NiFi, Kafka, AWS infrastructure, and Kubernetes (K8s). Experience in distributed databases, NoSQL databases, and full-text search engines (e.g., Elasticsearch, MongoDB, Solr).