This role centers on the development of our custom-built "Ghost" robot, designed specifically to automate retail operations in Japanese convenience stores. The robot performs pick-and-place operations without human intervention, aiming for a high level of autonomy and operational reliability. The core responsibility of this role is to improve and expand the robot's perception capabilities, particularly through the development of vision-based systems that enable it to understand and act within complex, dynamic store environments. This includes leveraging sensor data (e.g., RGB, depth, and other modalities) collected from robots already deployed in real-world settings to enhance the performance of the perception system. The work involves researching, refining, and implementing new perception algorithms to further improve success rates, while ensuring that deployed models are optimized and compressed to run efficiently on the system's constrained hardware.

We strongly believe that our Ghost robot has the potential to reshape the retail landscape in Japan and beyond, and we would be thrilled to work with someone who shares that vision and excitement.
Responsibilities
- Contribute to the development of an automation system using custom hardware by building sophisticated software for the perception system, incorporating deep learning, computer vision, and control algorithms
- Collaborate with other automation engineers to meet priorities and goals set by the executive team
- Break down abstract goals into actionable tasks and manage your own timeline to achieve company objectives
- Analyze problems and data accurately, and deploy robust and safe systems within the given timelines
- Collect data, analyze outcomes, and continuously improve algorithms with a focus on quantitative performance

Requirements
- 3+ years of experience developing perception systems or equivalent experience, including academic research
- Quick and accurate problem-solving skills
- Strong debugging and issue-monitoring capabilities
- Extensive experience in low-level computer vision, machine learning, and deep learning, including implementing state-of-the-art methods from scratch
- Proficiency in Python and commonly used libraries such as PyTorch, OpenCV, NumPy, and SciKit
- Experience with libraries such as Open3D, Point Cloud Library, MMDetection, FastAI, etc. is not mandatory but considered an advantage
- Experience working with RGB and depth sensors (or multi-sensor setups such as RGB + LiDAR); see the RGB-D sketch after these lists
- Familiarity with data augmentation techniques
- Proven track record in customizing deep learning applications, such as:
  - Object detection/segmentation
  - Keypoint detection
  - 3D pose estimation and reconstruction
- Experience with model performance optimization techniques (speed, latency, etc.), for example (see the export sketch after these lists):
  - Network layer optimization
  - TensorRT / TorchScript

Preferred Skills
- Proficiency in modern C++ (C++11, C++14, or newer)
- Experience with ROS
- Knowledge of camera calibration and hand-eye calibration (see the calibration sketch after these lists)
- Research experience in computer vision, AI, or robotics (e.g., publications at CVPR, ICRA)
- Experience in synthetic dataset generation and domain randomization techniques using Blender, OpenGL, Isaac Sim, MuJoCo, etc.
- Experience working with manipulators
- Background in startups or fast-paced environments
- Engineering experience in one or more of the following:
  - Motion planning and execution
  - Signal processing
  - Network communication (e.g., pub/sub, req/rep)
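To give a concrete flavor of the RGB-D work referenced above, here is a minimal sketch, assuming Open3D and a registered color/depth pair, of back-projecting sensor frames into a colored point cloud. The synthetic frames, camera intrinsics, and voxel size below are illustrative placeholders, not values from our system.

```python
# Minimal RGB-D sketch (assumes Open3D). Synthetic frames stand in for a
# registered color/depth pair from the robot's camera; intrinsics are placeholders.
import numpy as np
import open3d as o3d

# Synthetic stand-ins for one registered RGB/depth frame pair.
color_np = np.full((480, 640, 3), 200, dtype=np.uint8)   # flat gray RGB image
depth_np = np.full((480, 640), 800, dtype=np.uint16)     # depth in millimeters
color = o3d.geometry.Image(color_np)
depth = o3d.geometry.Image(depth_np)

# Fuse into an RGBDImage; depth_scale=1000 assumes millimeter depth,
# depth_trunc clips far readings (in meters).
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

# Pinhole intrinsics (width, height, fx, fy, cx, cy); in practice these come
# from camera calibration, the numbers here are placeholders.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)

# Back-project into a colored 3D point cloud and downsample it before handing
# the result to downstream perception modules.
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
pcd = pcd.voxel_down_sample(voxel_size=0.005)
print(pcd)  # e.g. "PointCloud with N points."
```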
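On the model optimization side, the following is a hedged sketch of packaging a trained PyTorch model with TorchScript and exporting ONNX, which is a common starting point for building a TensorRT engine. The ResNet-18 stand-in, input shape, and file names are assumptions for illustration, not our deployment pipeline.

```python
# Sketch: TorchScript export of a PyTorch model for constrained hardware.
# The ResNet-18 below is only a stand-in for a real perception model.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.eval()

# Trace with a representative input; tracing records the executed graph, so a
# model with data-dependent control flow would need torch.jit.script instead.
example_input = torch.randn(1, 3, 480, 640)
traced = torch.jit.trace(model, example_input)
traced.save("perception_model_ts.pt")

# ONNX export is a common intermediate for a subsequent TensorRT engine build.
torch.onnx.export(model, example_input, "perception_model.onnx", opset_version=13)
```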
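For the hand-eye calibration item, below is a small sketch built on OpenCV's calibrateHandEye, exercised on synthetic, noise-free poses so the script runs standalone. In a real cell the gripper poses come from the robot controller and the target poses from a calibration board detected by the camera; every numeric transform here is an arbitrary placeholder.

```python
# Sketch: eye-in-hand calibration with cv2.calibrateHandEye on synthetic poses.
# All transforms are arbitrary placeholders used only to exercise the solver.
import numpy as np
import cv2

def to_mat(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, np.asarray(t).ravel()
    return T

rng = np.random.default_rng(0)

# Ground-truth camera-to-gripper transform we want the solver to recover.
R_cg = cv2.Rodrigues(np.array([[0.1], [-0.2], [0.3]]))[0]
t_cg = np.array([0.05, 0.02, 0.10])
X = to_mat(R_cg, t_cg)

# Fixed calibration-target pose in the robot base frame.
T_target2base = to_mat(cv2.Rodrigues(np.array([[0.0], [0.4], [-0.1]]))[0],
                       [0.6, 0.1, 0.0])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    # Random gripper pose in the base frame (would come from forward kinematics).
    T_g2b = to_mat(cv2.Rodrigues(rng.normal(size=(3, 1)) * 0.5)[0],
                   rng.normal(size=3) * 0.3)
    # Implied target pose in the camera frame: inv(X) @ inv(T_g2b) @ T_target2base.
    T_t2c = np.linalg.inv(X) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3])
    t_g2b.append(T_g2b[:3, 3:])
    R_t2c.append(T_t2c[:3, :3])
    t_t2c.append(T_t2c[:3, 3:])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("recovered t_cam2gripper:", t_est.ravel())  # should be close to t_cg
```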