For more information on any of the open requisitions below please contact: jobs@vectorspacebio.science
Computational Biologist - Space Biosciences/Bioinformatics
Job Description & Key Responsibilities
Develop data analysis strategies, write algorithms, and deploy computational tools for the exploration of proteomics and single-cell datasets (scRNA-seq, scATAC-seq, CITE-seq)
Work with software engineering, data science, genomics, biochemistry, and proteomics teams on the development of novel platforms for extracting biological insight from experimental data at multiple spatial and temporal scales
Interact closely with scientists in discovery & translational research, understand their data manipulation and analysis needs, and provide answers to technical questions through 1-on-1 communication, presentations, and written documents
Job Requirements
Background & Desirable Skills
- PhD in Statistics, Computer Science, Computational Biology, Bioinformatics, Bioengineering, or a related field; a Master's in one of the above fields with 2+ years of experience in academia or industry; or a BS with 4+ years in academia or industry.
- Experience in analyzing and obtaining actionable insights from high-throughput sequencing (HTS/NGS) data is required. Examples include RNA-seq, ATAC-seq, WGS, and scRNA-seq, among other HTS data types.
- Knowledge in experimental design and experience working with teams in designing experiments.
- Experience with language modeling, deep learning and related practical applications
- Experience with at least one of the following:
- Microsoft Azure Space/SpaceX
- Python
- Linux command line
- Bioinformatics
- Experience with DNA repair pathway analysis
- Knowledge of precision and personalized medicine, drug repurposing and repositioning, multiomics including nutrigenomics and epigenomics, CRISPR
Research areas of interest
- Experience working with companies and space agencies such as Virgin Galactic, SpaceX, Blue Origin, NASA Space Biosciences, ESA, and JAXA, among others
- Knowledge of GCR (galactic cosmic rays), HZE (high-energy, high-charge) ions, and Bragg peak and 'track' correlation analysis related to DNA repair pathways, along with high/low-LET (linear energy transfer) radiation, telomere elongation/shortening, chromosomal translocations, dysregulated gene expression, and related multiomics research in connection with space biosciences.
- Experience with software tools used in Space Biosciences, e.g.:
- NASA GeneLab
- Unsupervised learning and experimental clustering
- Bioinformatics toolsets
- Radiation affecting the microenvironment
- Exosomic cargo
- Biochemical cascades
- ECM and Brain ECM
- Dynamic Reciprocity
- TME (Tumor microenvironment)
- Exosomes
- DNA repair pathways, time to repair data and factors
- Biomarkers that can be used to predict the time required for DNA repair cycles to complete (this data point can be used for nanoscale modifications to shielding and shielding strategy in real time)
- Knowledge of key targets of particle damage correlated to type of particle and track:
- DNA bases/genes
- Carbohydrates
- Proteins
- Lipids
- Mitochondria
- Blood cells
- Membrane receptors
- Cell adhesion molecules
- ECM
- Immune cells
- Stem cells
- Endothelium
- Exosomes
- Knowledge of key effects of particle damage correlated to type of particle and track:
- Clustered DNA damage
- Persistent mutations and chromosome aberrations
- Reduced DNA and cellular repair
- Drastic G2/M block and altered cell cycle kinetics
- Enhanced cytokine activation
- Tumorigenesis at high dose, high-LET or HZE
- Apoptosis, autophagy, senescence, mitotic catastrophe, necrosis
- Altered gene expression and differentiation
- Changes in cell-cell communication and non-targeted effects
- Changes in cell adhesion and motility
- Changes in angiogenesis
- NASA’s Human Research Program (HRP) Roadmap, NASA GeneLab, and the Biospecimen Sharing Program (BSP)
- Wetlab exposure
- Side projects
International Traffic in Arms Regulations (ITAR) Requirements
- To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, a lawful permanent resident of the U.S., a protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State. Learn more about ITAR here.
Artificial Intelligence/Machine Learning Engineer - Space Biosciences
Description & Key Responsibilities
Develop experimental and formal NLP/NLU language models that mimic portions of human cognition
Develop experimental and formal deep learning models that reach near-human-level accuracy in mimicking the research process of molecular biologists
Job Requirements
Background & Desirable Skills
- 3+ years of experience developing computational models and building and deploying experimental or production-grade data pipelines
- Strong Python fundamentals
- Experience with at least one of the following:
- Microsoft Azure Space/SpaceX
- Linux command line
- Good basic understanding of NLP/NLU language modeling, ensemble methods in data engineering pipelines and correlation matrix datasets
Research Areas of Interest
- Experience working with companies and space agencies such as Virgin Galactic, SpaceX, Blue Origin, NASA Space Biosciences, ESA, and JAXA, among others
- Knowledge of GCR (galactic cosmic rays), HZE (high-energy, high-charge) ions, and Bragg peak and 'track' correlation analysis related to DNA repair pathways, along with high/low-LET (linear energy transfer) radiation, telomere elongation/shortening, chromosomal translocations, dysregulated gene expression, and related multiomics research in connection with space biosciences.
- Knowledge of precision and personalized medicine, drug repurposing and repositioning, multiomics including nutrigenomics and epigenomics, CRISPR
- Experience with:
- Computational Cognition
- Computational Linguistics
- Computational Neuroscience
- Computational Biology
- Bioinformatics
- NASA GeneLab
- Wetlab experience
- Side projects
International Traffic in Arms Regulations (ITAR) Requirements
- To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, a lawful permanent resident of the U.S., a protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State. Learn more about ITAR here.
Data Visualization & Interpretation - Space Biosciences
Description & Key Responsibilities
Able to design and develop industry-leading data visualizations based on an understanding of common and fine-grained data visualization/representation challenges and data problems as they relate to the use cases of our current and prospective customers.
Understands the tradeoffs between graphics paradigms, speaks multiple visualization grammars, and knows how they relate to the data we have and are trying to represent (recognizing the scales involved and what it means when data oppose each other on different axes).
In this role, you will lead data-driven decision-making with the team about which languages, frameworks, and libraries we should use to visualize customer data, best matched to the twin challenges of exploratory data analysis and analytic presentation.
Cultivate your knowledge and ours, and help educate our team in your areas of expertise.
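One concrete instance of the scale tradeoffs described above is plotting two series with very different units on a shared axis. A minimal sketch, assuming min-max normalization is an acceptable choice (other rescalings may fit better for a given dataset):

```python
def min_max_scale(values):
    """Rescale a numeric series to [0, 1] so series with very
    different units can share one axis in a comparison plot.
    A constant series is mapped to all zeros to avoid dividing by zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Two hypothetical series on wildly different scales
radiation_dose = [0.2, 0.5, 1.1, 2.0]       # e.g. mGy
gene_expression = [1200, 900, 400, 150]     # e.g. counts

scaled_dose = min_max_scale(radiation_dose)
scaled_expr = min_max_scale(gene_expression)
```

After rescaling, both series span [0, 1] and can be overlaid on one axis, making an inverse relationship visible without dual axes.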
Job Requirements
Background & Desirable Skills
- Deep domain expertise of data science, statistical analysis, and data visualization
- Adept at interacting with JSON REST APIs with standard tools (e.g., Postman)
- A generalist with working knowledge of data visualization libraries and packages in use today: Python (SciPy/NumPy/pandas, Seaborn, Bokeh), R (ggplot2, grid), JavaScript (D3.js, Vega, Plotly), etc.
- Fluency with a Git/GitHub version control workflow (though not primarily a coding role, you’ll work closely with our team of developers)
- Strong Python fundamentals
- Experience with at least one of the following:
- Microsoft Azure Space/SpaceX
- Google Cloud Platform
- AWS
- Good basic understanding of servers
Nice to Haves
- Biological data visualization
- Graph Networks
- Side projects
International Traffic in Arms Regulations (ITAR) Requirements
- To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, a lawful permanent resident of the U.S., a protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State. Learn more about ITAR here.
Mid-level Data Engineer
Description & Key Responsibilities
As a mid-level Data Engineer you will be responsible for improving the architecture of a general-purpose data pipeline used to do ETL of various data sources: crawled web pages, various APIs, PDFs, text files, etc.
You will build, maintain, scale, and support existing data pipelines and deploy them on a cloud service provider. You will ensure proper storage of the raw and processed data.
You will set up monitoring for this pipeline, re-run it on failure, and update the pipeline with improvements without disrupting ongoing operations.
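The re-run-on-failure behavior described above can be sketched as a small retry wrapper. This is an illustrative pattern, not our actual stack; the function name, logger, and backoff defaults are assumptions:

```python
import logging
import time

log = logging.getLogger("pipeline")

def run_with_retries(step, max_attempts=3, backoff_s=1.0):
    """Run a pipeline step, re-running on failure with exponential backoff.

    `step` is any zero-argument callable. Each failure is logged so a
    monitoring system can alert on repeated attempts; the final failure
    is re-raised so the orchestrator sees the run as failed.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s * 2 ** (attempt - 1))
```

In practice an orchestration tool (e.g. Dagster or Airflow, listed under Nice to Haves) provides this behavior declaratively; the sketch just shows the underlying idea.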
You will integrate this pipeline to feed data to our machine learning / language models and archive the model's artifacts (weights, hyperparameters, model training code versions) at each run, as well as load these artifacts into an API server. You will have the opportunity to learn more about NLP / NLU language models and how to train and evaluate them.
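Archiving a run's artifacts, as described above, might look like the following sketch. The manifest layout and field names are illustrative assumptions, not a prescribed schema:

```python
import json
import time
from pathlib import Path

def archive_run(run_dir, weights_path, hyperparams, code_version):
    """Record one training run's artifacts so the run is reproducible.

    Writes a manifest pointing at the model weights, the hyperparameters
    used, and the training-code version (e.g. a git commit hash).
    """
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)
    manifest = {
        "weights": str(weights_path),
        "hyperparameters": hyperparams,
        "code_version": code_version,
        "archived_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    (run_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

def load_run(run_dir):
    """Load a run's manifest, e.g. when an API server loads the model."""
    return json.loads((Path(run_dir) / "manifest.json").read_text())
```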
You are expected to share knowledge with the team and mentor junior data engineers.
Job Requirements
Background & Desirable Skills
- 2-4 years of experience building and deploying production-grade data pipelines
- Strong Python fundamentals
- Experience with at least one of the following:
- Microsoft Azure Space/SpaceX
- Linux command line
- Good basic understanding of NLP/NLU language modeling, ensemble methods in data engineering pipelines and correlation matrix datasets
Nice to Haves
- Experience with a data orchestration tool
- Dagster
- Airflow
- Experience in CI/CD, DevOps
- Experience in API engineering
- Experience mentoring junior engineers
- Basic experience with training of machine learning models and archiving of artifacts
- Side projects
- Technical blog
International Traffic in Arms Regulations (ITAR) Requirements
- To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, a lawful permanent resident of the U.S., a protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State. Learn more about ITAR here.
Mid-level NLP / NLU Engineer
Description & Key Responsibilities
As a mid-level NLP / NLU Engineer you will be responsible for the training and evaluation of the language models we use to find hidden relationships in proprietary and public data which we may obtain through our partners or crawl from websites, APIs, and files.
You will examine the data thoroughly to determine how best to clean, normalize, and/or preprocess it, and create stop-word lists. If you find the model lacks the data it needs to perform well, you will be expected to help the Data Engineering team find sources for that data.
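The cleaning and stop-word steps described above can be sketched in a few lines. The token pattern and stop-word list below are toy illustrations; real pipelines would tune both to the corpus:

```python
import re

# Illustrative stop-word list; real lists are corpus-specific
STOP_WORDS = {"the", "a", "an", "of", "and", "in", "to"}

def preprocess(text, stop_words=STOP_WORDS):
    """Lowercase the text, tokenize on alphanumeric runs, and
    drop stop words before the tokens are fed to a language model."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in stop_words]
```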
You will research model architectures and choose the right ones through experimentation. These experiments must be replicable (via Jupyter notebooks or your preferred experimentation stack), and their results should be meticulously logged. You will train and optimize these models, logging the hyperparameters, metrics, and performance of each run.
You will be asked to experiment with ensembles of language models, optimizing for the business-specific metric our partners care about. For example, for financial institutions this could be the Sharpe or Sortino ratio.
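As a concrete example of such a metric, an annualized Sharpe ratio can be computed from a series of periodic returns. This is a minimal sketch; the risk-free rate and annualization factor are assumptions a partner would specify:

```python
import math
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio: mean excess return over its volatility.

    `returns` are periodic (e.g. daily) returns; `periods_per_year=252`
    assumes daily trading data. Raises if volatility is zero.
    """
    excess = [r - risk_free_rate for r in returns]
    vol = statistics.stdev(excess)
    if vol == 0:
        raise ValueError("zero volatility: Sharpe ratio undefined")
    return statistics.mean(excess) / vol * math.sqrt(periods_per_year)
```

An ensemble could then be tuned to maximize this value on held-out data rather than a generic loss like perplexity.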
You will create modules for different models, which will be used inside our pipelines to load saved models and run inference on incoming data.
You will help us improve our correlation matrix datasets by tweaking the code that generates them. You will research the best methods for achieving context control (e.g., how similar are Moderna and AstraZeneca in the context of space biosciences? How about in the context of DNA repair?)
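One simple way to sketch the context control described above is to compare two terms' vectors only along context-relevant dimensions. The term vectors and feature dimensions below are toy illustrations, not our datasets or method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_in_context(vectors, a, b, context_dims):
    """Cosine similarity between terms a and b, restricted to the
    feature dimensions associated with a chosen context."""
    u = [vectors[a][i] for i in context_dims]
    v = [vectors[b][i] for i in context_dims]
    return cosine(u, v)

# Toy feature dims: [vaccines, oncology, radiation, microgravity]
vectors = {
    "moderna":     [0.9, 0.2, 0.1, 0.0],
    "astrazeneca": [0.8, 0.6, 0.1, 0.1],
}

overall = cosine(vectors["moderna"], vectors["astrazeneca"])
in_space_context = similarity_in_context(
    vectors, "moderna", "astrazeneca", context_dims=[2, 3])
```

The two terms can look quite similar overall yet less similar once the comparison is restricted to space-biosciences features, which is the effect context control aims to expose.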
You are expected to share your NLP / machine learning knowledge to upskill the team and mentor more junior NLP engineers.
Job Requirements
Background & Desirable Skills
- 1-4 years of experience training and evaluating a variety of language models and deploying them to production
- Strong machine learning fundamentals and Python skills
- Proficiency in at least one machine learning library such as PyTorch, TensorFlow, Keras, or fastai
- A scientific mindset and an interest in research and development
Nice to Haves
- Experience in MLOps / the deployment and serving of models
- Experience in experiment tracking tools e.g. Weights & Biases
- Actual NLP / NLU research experience (published papers, experiments)
- Experience in web crawling, pulling data from APIs, and processing different filetypes
- Experience mentoring junior NLP / NLU practitioners
- Side projects
- Technical blog
International Traffic in Arms Regulations (ITAR) Requirements
- To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, a lawful permanent resident of the U.S., a protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State. Learn more about ITAR here.