
Easy reconfiguration for automated part handling
In this use case, three building blocks are defined that together provide flexibility from a part-handling perspective.

The first building block revolves around a system for automated part recognition and detection. The idea is to use a vision setup that recognizes a product in a detection frame based on a knowledge base containing the information needed for detection, recognition, and localization (e.g., photos, CAD drawings, process parameters).

Once the product has been identified and localized, this information is transferred to the second building block, whose goal is to automatically define the actions a cobot must take to pick the part and perform the required operations. This is done in a simulated reality that combines information about the cobot and the process with the output of the first building block.

Knowing which product must be handled, where it is located, and which actions the cobot needs to perform brings us to the third building block, which translates the simulated reality into real-world action: the cobot actions defined in simulation are turned into an actual cobot program that can be communicated to the cobot.

Taken together, this means a product can be detected, identified, and localized; a simulation of the process to be performed can then be created; and that simulated reality can finally be translated into reality so the process is carried out as intended. The sketches below illustrate what each building block might look like in code.
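For the first building block, the following is a minimal sketch assuming a Python vision stack with OpenCV. The knowledge-base layout, file paths, and match threshold are illustrative assumptions; the use case leaves the detection method open, so simple template matching against reference photos stands in for whatever recognition approach is actually used.

```python
# Sketch of building block 1: identify and localize a part in a camera
# frame by template matching against a knowledge base of product photos.
# Paths, the knowledge-base layout, and the confidence threshold are
# illustrative assumptions, not part of the original use case.
import cv2

# Hypothetical knowledge base: product id -> reference photo on disk.
KNOWLEDGE_BASE = {
    "bracket_a": "kb/bracket_a.png",
    "housing_b": "kb/housing_b.png",
}

def identify_and_localize(frame_gray, threshold=0.8):
    """Return (product_id, (x, y), score) for the best match, or None."""
    best = None
    for product_id, photo_path in KNOWLEDGE_BASE.items():
        template = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
        if template is None:
            continue  # skip entries whose reference photo is missing
        result = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)
        if score >= threshold and (best is None or score > best[2]):
            best = (product_id, top_left, score)
    return best

frame = cv2.imread("detection_frame.png", cv2.IMREAD_GRAYSCALE)
match = identify_and_localize(frame)
if match is not None:
    product_id, location, score = match
    print(f"Detected {product_id} at {location} (score {score:.2f})")
```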
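The second building block could then turn the detection result into a candidate action sequence to be validated in simulation. The sketch below stays in plain Python; the action vocabulary, poses, and process parameters are assumptions, and it presumes the image location has already been converted to robot-base coordinates (e.g., via hand-eye calibration).

```python
# Sketch of building block 2: given the identified product and its
# location, derive an ordered list of abstract cobot actions. The action
# vocabulary, pose representation, and process parameters are illustrative
# assumptions; a real implementation would verify the resulting sequence
# in a simulation environment before execution.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "move", "grasp", or "release"
    pose: tuple          # (x, y, z) target in metres, robot base frame
    speed: float = 0.25  # m/s, conservative default

# Hypothetical process parameters per product (pick height, place pose).
PROCESS_PARAMS = {
    "bracket_a": {"pick_z": 0.02, "place": (0.40, -0.30, 0.05)},
}

def plan_pick_and_place(product_id, part_xy):
    """Turn a detected part location into a simulated action sequence."""
    params = PROCESS_PARAMS[product_id]
    x, y = part_xy
    pick = (x, y, params["pick_z"])
    above_pick = (x, y, params["pick_z"] + 0.10)  # approach from above
    place = params["place"]
    return [
        Action("move", above_pick),
        Action("move", pick, speed=0.05),  # slow final approach
        Action("grasp", pick),
        Action("move", above_pick),
        Action("move", place),
        Action("release", place),
    ]

plan = plan_pick_and_place("bracket_a", (0.12, 0.34))
for step in plan:
    print(step)
```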
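For the third building block, one possible translation target is a Universal Robots cobot, which accepts URScript programs over a TCP socket (port 30002). This vendor choice, the controller IP address, and the gripper wired to digital output 0 are all assumptions for illustration; other cobots would need a different program generator and transport.

```python
# Sketch of building block 3: translate the simulated action sequence
# into an executable cobot program. Assumes a Universal Robots cobot that
# accepts URScript over TCP (port 30002) and a gripper on digital output
# 0; the controller IP is a placeholder.
import socket

# Example plan in the shape produced by building block 2:
# (kind, (x, y, z) in metres, speed in m/s).
plan = [
    ("move", (0.12, 0.34, 0.12), 0.25),
    ("move", (0.12, 0.34, 0.02), 0.05),
    ("grasp", (0.12, 0.34, 0.02), 0.0),
    ("move", (0.40, -0.30, 0.05), 0.25),
    ("release", (0.40, -0.30, 0.05), 0.0),
]

def to_urscript(plan, orientation=(0.0, 3.14, 0.0)):
    """Render an abstract action plan as a URScript program."""
    rx, ry, rz = orientation  # assumed fixed tool orientation (rad)
    lines = ["def handle_part():"]
    for kind, (x, y, z), speed in plan:
        if kind == "move":
            lines.append(
                f"  movel(p[{x:.3f}, {y:.3f}, {z:.3f}, {rx}, {ry}, {rz}],"
                f" a=0.5, v={speed})"
            )
        elif kind == "grasp":
            lines.append("  set_digital_out(0, True)   # close gripper")
            lines.append("  sleep(0.5)")
        elif kind == "release":
            lines.append("  set_digital_out(0, False)  # open gripper")
            lines.append("  sleep(0.5)")
    lines.append("end")
    return "\n".join(lines) + "\n"

program = to_urscript(plan)
print(program)  # inspect the generated program before sending

# Communicate the program to the cobot (IP address is a placeholder).
with socket.create_connection(("192.168.0.10", 30002), timeout=5) as s:
    s.sendall(program.encode("utf-8"))
```

In a real cell, the generated program would of course be reviewed or re-run in simulation before being sent to the physical cobot.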