About Us:
At AssistIQ we are dedicated to creating a more efficient and transparent healthcare supply chain by fixing one of its core problems: providers lack accurate data and insights on their supply and implant usage. Our AI-driven software solution provides highly accurate, seamless capture of supply and implant usage in real time and generates actionable insights for healthcare systems, enabling better revenue capture and reduced waste, ultimately leading to better value of care and better outcomes for patients.

About the Role:
In the role of ML Engineer, you'll transform prototypes and experimental models developed by Data Scientists into scalable, maintainable, and production-ready Python applications. You'll also integrate these solutions with databases and cloud services, as well as with the UX interface, ensuring performance, reliability, and alignment with broader engineering standards. You will leverage best practices, supporting the development of a scalable implementation model that serves our customers. Your ultimate goal is to deliver stable and successful solutions to our customers. We're excited by candidates who enjoy and are capable of working in a fast-paced entrepreneurial environment.
To be successful, you'll need to combine strong Python development skills with a pragmatic understanding of how to turn experimental code into robust, scalable, and cloud-ready applications. Equally important, you'll thrive in close collaboration with data scientists, engineers, product managers, and other cross-functional team members. Given the nature of startup life, this role is dynamic, with priorities evolving regularly alongside strong delivery commitments.

Responsibilities:
- Productionize machine learning models and data science workflows
- Translate Jupyter notebook code into clean, modular Python code
- Develop and debug code within Jupyter Notebooks
- Refactor and optimize algorithms for efficiency, scalability, and maintainability
- Package models into deployable components (e.g., Docker containers, Python packages)
- Implement model inference pipelines and batch or streaming prediction jobs
- Monitor and troubleshoot performance of models in production
- Collaborate with Data Scientists to validate model behavior and output post-deployment
- Communicate project updates to customer stakeholders throughout the implementation process
- Identify and escalate potential risks to the implementation timeline in a timely manner
- Develop and maintain backend Python services and APIs
- Design, build, and maintain a Python-based framework that enables repeatable development and seamless deployment of machine learning models and advanced analytics solutions
- Handle input validation, data preprocessing, and result formatting in services
- Write automated tests to ensure code reliability and reproducibility
- Integrate logging, exception handling, and versioning in deployed services
- Manage dependency configuration and environment setup for deployments
- Optimize response time and throughput for model-serving endpoints
- Collaborate on cloud and database integration for solid and scalable deployment
- Interface with cloud platforms (e.g., AWS, GCP) for deployment and storage
- Work with relational and non-relational databases (e.g., PostgreSQL, BigQuery)
- Implement data ingestion and feature retrieval pipelines
- Ensure secure and compliant access to sensitive data
- Contribute to development workflows for seamless deployment and updates

Requirements:
- 3+ years of hands-on Python programming experience, with strong knowledge of software engineering best practices
- Proven experience turning data science prototypes into production-grade code and services
- 2+ years deploying and supporting Python-based ML workloads and ETL data pipelines
- Familiarity with machine learning concepts and workflows, even if not building models from scratch
- Experience deploying and maintaining applications in cloud environments (e.g., AWS, GCP, Azure)
- Experience with Apache Airflow or similar workflow orchestration tools for building and managing data and ML pipelines
- Solid understanding of database technologies, including both SQL and NoSQL systems
- Proficiency with development tools such as Git, Docker, Makefiles, virtual environments, and testing frameworks
- Ability to build and document modular, reusable, and testable code for long-term maintainability
- Strong problem-solving mindset, with the ability to work independently
- Ability to adapt quickly and switch between tasks or priorities in a fast-paced, dynamic start-up environment
- Excellent communication and collaboration skills, with a willingness to work closely with Data Scientists, Engineers, and Product teams

Benefits:
- Health insurance
- Business travel when needed
- 3 weeks of vacation
- 10 sick days
- Flexible work hours
- Hybrid in Toronto or Montreal