Clarkson Computer Science PhD Student Presents Work on AI-Driven Repair of Damaged Objects Using Additive Manufacturing at Top Venue in Geometry Processing

Nikolas Lamb, Natasha Banerjee, and Sean Banerjee

Clarkson Computer Science PhD student and NSF Graduate Research Fellowship winner Nikolas Lamb presented his research on using artificial intelligence (AI) techniques, particularly computer vision and deep learning, to repair damaged objects at the Eurographics Symposium on Geometry Processing (SGP), the premier venue for research in geometry processing, on July 5. Nikolas is advised by Drs. Natasha Banerjee and Sean Banerjee, both associate professors in the Department of Computer Science. Nikolas' research will appear as a published paper in the 2022 volume of Computer Graphics Forum, the leading journal for in-depth technical articles on computer graphics. Nikolas' work is the first from Clarkson to be presented at SGP and to appear in Computer Graphics Forum.

Nikolas' work introduces a new approach named MendNet, a deep neural network that automatically synthesizes repair parts for 3D models of damaged objects so that the parts can be additively manufactured. Nikolas' approach to automated 3D repair synthesis is the first of its kind. Prior to Nikolas' research, if a user's valuable heirloom broke, with broken-off pieces damaged beyond repair, restoring the object was a major challenge: the user had to painstakingly model the complex geometry of the broken part in 3D. This is something most users are unlikely to do, so it is no surprise that a large number of damaged objects end up being thrown away, increasing environmental waste and harming sustainability.

Nikolas' research plays a key role in furthering Clarkson's commitment to sustainability by using AI to automate the repair process and encouraging end users to choose 'repair' over 'replace'. Users can now repair broken items, such as precious ceramic crockery, with minimal effort. Nikolas' automated repair algorithm allows users to scan a broken item, automatically synthesize the repair part, and send it to a 3D printer. The work takes advantage of the ubiquity of 3D printers and the emergence of consumer 3D printers for materials such as ceramics and wood. By linking AI, computer vision, and deep learning to the manufacturing process, Nikolas' work significantly changes the landscape of advanced manufacturing, putting high-speed manufacturing in the hands of the average user.

Nikolas' work has a broader impact on advancing knowledge in domains such as archaeology, anthropology, and paleontology by providing a user-friendly approach to repairing cultural heritage artifacts, damaged fossil specimens, and fragmented remains, reducing the burden on researchers and enabling them to focus on answering the research questions that matter to their domain. The work also has implications for automating repairs in dentistry and medicine.

Nikolas is a member of the Terascale All-sensing Research Studio (TARS) at Clarkson University. TARS supports the research of 15 graduate students and nearly 20 undergraduate students each semester. TARS has one of the largest high-performance computing facilities at Clarkson, with 275,000+ CUDA cores and 4,800+ Tensor cores across 50+ GPUs, and nearly 1 petabyte of storage. TARS houses the Gazebo, a massively dense, multi-modal, markerless, multi-viewpoint motion capture facility for capturing multi-person interactions with 192 high-speed cameras recording at 226 FPS, 16 Microsoft Azure Kinect RGB-D sensors, 12 Sierra Olympic Viento-G thermal cameras, and 16 surface electromyography (sEMG) sensors, as well as the Cube, a one- and two-person 3D imaging facility with 4 high-speed cameras, 4 RGB-D sensors, and 5 thermal cameras. TARS researches the use of deep learning to understand natural multi-person interactions from massive datasets so that next-generation technologies, such as intelligent agents and robots, can be seamlessly integrated into future human environments.

The team thanks the Office of Information Technology for granting access to the ACRES GPU node with 4 V100 GPUs, providing 20,480 CUDA cores and 2,560 Tensor cores.
