
Operational Programme “Growth and Employment”, Activity 1.2.1.1 “Support for the Development of New Products and Technologies within Competence Centers” of Specific Support Objective 1.2.1 “Increase Private Sector Investment in R&D”
Project No. 1.2.1.1/16/A/002 “Competence Center for the Production of Electrical and Optical Equipment in Latvia”
Research: “Research on the Development of Computer Vision Methods for Industrial Process Automation”
Research proposer: Institute of Electronics and Computer Science (EDI)
Research duration: 01.11.2016 – 30.11.2018
The project and research are financed by the European Union.
03.12.2018.
During the period from October 1, 2018 to November 30, 2018, EDI completed the industrial research and experimental development activities within the project “Research on the Development of Computer Vision Methods for Industrial Process Automation” (No. 1.2.1.1/16/A/002).
In the industrial research activity, the scientific team worked on training a variant of the Generative Adversarial Network (GAN) called CycleGAN to transform synthetic training data into more visually realistic data; on improving a neural-network-based object image classifier and testing the software; on integrating the convolutional neural-network-based object image classifier into the final system and performing final tests; on implementing the robot–camera calibration package in the control system, carrying out calibration, determining its accuracy and performing intensive system testing. They also carried out overall system testing and various bug fixes, system performance analysis and speed improvements, as well as sensor integration into the robot control cycle.
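As an illustration of the synthetic-to-real translation step, the following is a minimal sketch of applying an already trained CycleGAN generator to a synthetic render, assuming PyTorch and a TorchScript-exported generator (the file and image names are hypothetical); it is not the project’s actual pipeline.

```python
# Minimal sketch of applying a trained CycleGAN generator to make a synthetic
# render look more realistic. Assumes PyTorch and a TorchScript-exported
# generator saved as "synthetic2real.pt" (hypothetical file names throughout);
# this is not the project's actual pipeline.
import torch
from torchvision import transforms
from PIL import Image

# Load the synthetic-to-real generator (one of CycleGAN's two generators).
generator = torch.jit.load("synthetic2real.pt").eval()

# CycleGAN conventionally works on images normalized to [-1, 1].
to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

img = to_tensor(Image.open("synthetic_render.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    translated = generator(img)                 # output is in [-1, 1]

# Map back to [0, 1] and save for use as training data.
out = (translated.squeeze(0) * 0.5 + 0.5).clamp(0, 1)
transforms.ToPILImage()(out).save("translated.png")
```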
The project’s scientific staff worked on using Sobel filter coefficients for feature extraction in the Matlab/Octave environment, comparing them with self-developed features; obtaining depth maps and checking their consistency; creating a digital circuit based on the developed algorithm; and testing LoRa antennas, comparing self-made antennas with manufactured ones. In parallel, the use of an IR sensor was studied for determining the temperature of objects manipulated by the robot; experiments were conducted to determine the optimal placement of the IR sensor; a classifier program was created for differentiating objects with different temperatures; the program’s real-time performance was verified; the implementation of the RBNN (Radially Bounded Nearest Neighbor) algorithm in C++ was finalized, tests were performed with different input arguments, and possibilities for improving the algorithm were studied by combining other image segmentation methods with the existing algorithm to increase result accuracy. At the same time, the overall system software code was reviewed and restructured for more convenient and transparent use, and software parts were divided into modules according to their functionality.
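For illustration, a Python/OpenCV equivalent of Sobel-based feature extraction is sketched below; the project’s own experiments were in Matlab/Octave, and the histogram feature shown here is an assumed example rather than the self-developed features mentioned above.

```python
# Illustrative Python/OpenCV sketch of Sobel-based feature extraction; the
# histogram feature below is an assumed example, not the self-developed
# features compared in the project.
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

# Horizontal and vertical derivatives with the 3x3 Sobel kernels.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)

# Per-pixel gradient magnitude and orientation.
magnitude = cv2.magnitude(gx, gy)
orientation = cv2.phase(gx, gy, angleInDegrees=True)

# A simple image-level feature vector: a histogram of gradient orientations
# weighted by magnitude (a HOG-like descriptor).
hist, _ = np.histogram(orientation, bins=18, range=(0, 360), weights=magnitude)
feature_vector = hist / (hist.sum() + 1e-9)          # normalized
print(feature_vector)
```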
In the experimental development activity, the scientific team continued working on the final version of the experimental system prototype software; on generating various datasets with the experimental Virtual Object Generator software, so that the convolutional neural network could successfully recognize objects, and on improving the Virtual Object Generator; and on researching software for training artificial neural networks on a high-performance computing server. The scientific staff converted the trained TensorFlow neural network model into the Caffe deep learning framework, implemented the network on a Zynq UltraScale+ heterogeneous embedded system, and carried out output-layer interpretation and output data processing to obtain the required result at the network output. In parallel, a binary protocol modem was developed for the RN2483 LoRa radio module to ensure data transmission at speeds from 18 b/s up to 37.5 kb/s (LoRa modulation) or 300 kb/s (FSK modulation). All planned commands for full radio configuration and diagnostics, as well as packet transmission and reception, were implemented. Temperature sensors were connected to the robot and tested, a 3D design of a robot component was produced, and software was written for using the ESP32 microprocessor’s ULP (Ultra-Low-Power) coprocessor. In the project’s final phase, the computer system of the actuator demonstrator prototype was tested under long-term load and final adjustments were made.
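As a hedged illustration of configuring and driving the RN2483, the sketch below sends commands from the module’s published UART command set via pyserial; the serial port name and radio settings are assumptions, and the project’s binary protocol modem itself is not reproduced here.

```python
# Hedged sketch of driving the RN2483 over its UART command interface with
# pyserial. The commands follow the module's published command reference; the
# serial port name and radio settings are assumptions, and the project's
# binary protocol modem itself is not reproduced here.
import serial

uart = serial.Serial("/dev/ttyUSB0", 57600, timeout=2)

def cmd(line):
    """Send one RN2483 command and return its single-line reply."""
    uart.write((line + "\r\n").encode("ascii"))
    return uart.readline().decode("ascii").strip()

print(cmd("sys get ver"))          # firmware identification
print(cmd("mac pause"))            # suspend the LoRaWAN stack for raw radio use

# Slow, robust LoRa settings (SF12 / 125 kHz) ...
print(cmd("radio set mod lora"))
print(cmd("radio set freq 868100000"))
print(cmd("radio set sf sf12"))
print(cmd("radio set bw 125"))
# ... or high-rate FSK modulation instead:
# print(cmd("radio set mod fsk"))

print(cmd("radio tx 48656C6C6F"))               # transmit "Hello" as hex
print(uart.readline().decode("ascii").strip())  # "radio_tx_ok" on success
```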
From November 1 to 3, 2018, researcher R. Kadiķis participated in the international conference “The 11th International Conference on Machine Vision” (ICMV 2018) in Munich, where he presented the scientific paper “Generation of Synthetic Training Data for Object Detection in Piles” on the research carried out in the project regarding the generation of synthetic training data and its application for industrial task automation.
On November 26, 2018, EDI organized the project’s closing seminar, where anyone interested could get acquainted with the project’s result – the developed set of tools for industrial process automation using 2D and 3D computer vision, as well as data processing with machine learning methods (neural networks, deep learning, etc.) to ensure object detection and recognition. The advantage of the solution is the modular architecture of system components (sensors, cameras, stereo modules, robot manipulators, etc.), which allows the use of devices from different manufacturers and models with minimal modifications. The system’s modularity is based on ROS (Robot Operating System) software.
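To illustrate the ROS-based modularity, here is a minimal rospy node that publishes detected object poses on a topic; any robot driver subscribing to the same topic can be swapped in. The topic name, frame, and message type are illustrative assumptions, not the project’s actual interfaces.

```python
#!/usr/bin/env python
# Minimal rospy sketch of the modularity idea: a vision node publishes object
# poses on a topic, and any robot driver subscribing to that topic can be
# swapped in. Topic name, frame, and message type are illustrative
# assumptions, not the project's actual interfaces.
import rospy
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node("object_detector")
    pub = rospy.Publisher("/detected_objects/pose", PoseStamped, queue_size=10)
    rate = rospy.Rate(10)                     # 10 Hz detection loop
    while not rospy.is_shutdown():
        pose = PoseStamped()
        pose.header.stamp = rospy.Time.now()
        pose.header.frame_id = "camera_link"
        pose.pose.position.x = 0.4            # placeholder detection result
        pub.publish(pose)
        rate.sleep()

if __name__ == "__main__":
    main()
```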
See the project presentation here: presentation
The system’s operation can be viewed in this demonstration video: https://www.youtube.com/watch?v=gmsn85Qk5WY
04.10.2018.
During the period from July 1, 2018 to September 30, 2018, EDI continued to work intensively on both industrial research activities and experimental development within the project “Research on the Development of Computer Vision Methods for Industrial Process Automation” (No. 1.2.1.1/16/A/002).
In the industrial research activity, the scientific team continued work on data labeling, improving the synthetic data generator and the neural-network-based object image classifier, training the neural network model and testing the software with data obtained from the ZED camera. The team worked intensively on the development of the overall system software, reviewing and restructuring its code for more convenient and transparent use, dividing software parts into modules according to their functionality, improving the robot control software, and developing object grasping motions based on the object orientation determined by the computer vision software. At the same time, the project team worked on developing, coding, debugging, and testing a Robot Operating System (ROS) driver for a distance sensor, researching communication capabilities between the ROS system and the embedded system, integrating the Zynq UltraScale+ MPSoC board with Intel RealSense cameras, running tests on the integrated system without a graphical environment, processing Intel RealSense data, and testing data quality and 3D coordinate accuracy. They also worked on verifying rosbridge and rosserial for data transfer from the embedded system to the PC-side ROS, and verified the implementation of the studied libraries using Python as well as their implementation in the embedded system using C.
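As a sketch of what such a distance sensor driver can look like (the project’s actual driver is not published here; the port, baud rate, line format, and sensor parameters are assumptions), a minimal rospy node publishing sensor_msgs/Range might read:

```python
#!/usr/bin/env python
# Hedged sketch of a minimal ROS driver for a serial distance sensor that
# publishes sensor_msgs/Range. The port, baud rate, line format, and sensor
# parameters are assumptions; the project's actual driver is not published.
import rospy
import serial
from sensor_msgs.msg import Range

def main():
    rospy.init_node("distance_sensor_driver")
    pub = rospy.Publisher("range", Range, queue_size=10)
    port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

    msg = Range()
    msg.radiation_type = Range.INFRARED
    msg.field_of_view = 0.05                  # rad, sensor dependent
    msg.min_range, msg.max_range = 0.02, 4.0  # meters, sensor dependent
    msg.header.frame_id = "distance_sensor_link"

    while not rospy.is_shutdown():
        line = port.readline().decode("ascii").strip()
        if not line:
            continue
        msg.header.stamp = rospy.Time.now()
        msg.range = float(line) / 1000.0      # assume readings in millimeters
        pub.publish(msg)

if __name__ == "__main__":
    main()
```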
In parallel, work was carried out on developing a ROS driver for a remote temperature sensor (TS), studying how the TS reading frequency affects measurement accuracy, and exploring the possibility of using the NanoLoRaWAN protocol for collecting measurements from multiple ROS sensors. The implementation possibilities of the RBNN (Radially Bounded Nearest Neighbor) algorithm using point cloud data obtained from Intel RealSense were studied. Material test data were compiled, and Bluetooth and LoRa connections were explored. A scientific publication titled “Generation of Synthetic Training Data for Object Detection in Piles” was prepared on the industrial research activities.
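For illustration, a naive Python version of RBNN clustering on a point cloud is sketched below, using a union-find over all point pairs closer than a radius r; the project’s C++ implementation is not reproduced, and the radius and cluster-size values are arbitrary.

```python
# Naive Python sketch of radially bounded nearest-neighbor (RBNN) clustering
# on a point cloud: points closer than radius r end up in the same cluster,
# and clusters below a minimum size are discarded. The project's C++
# implementation is not reproduced; the parameters are arbitrary.
import numpy as np
from scipy.spatial import cKDTree

def rbnn(points, r, min_size=10):
    """Return one cluster label per point; -1 marks discarded small clusters."""
    parent = np.arange(len(points))           # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path compression
            i = parent[i]
        return i

    tree = cKDTree(points)
    for i, j in tree.query_pairs(r):          # all pairs closer than r
        parent[find(i)] = find(j)             # union their clusters

    roots = np.array([find(i) for i in range(len(points))])
    labels = np.full(len(points), -1)
    for k, root in enumerate(np.unique(roots)):
        members = roots == root
        if members.sum() >= min_size:
            labels[members] = k
    return labels

# Example: two well-separated blobs yield two clusters.
pts = np.vstack([np.random.randn(100, 3) * 0.05,
                 np.random.randn(100, 3) * 0.05 + [1.0, 0.0, 0.0]])
print(np.unique(rbnn(pts, r=0.2, min_size=20)))
```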
In the experimental development activity, the scientific team continued work on advancing the experimental system prototype software, adapting the software for use with a real robot system, ensuring cooperation between convolutional neural networks and the robot operating system, improving the calibration of the prototype system’s robot system and RGB-D depth camera, and implementing an algorithm to align the stereo camera point cloud reference system with the robot’s reference system. The team researched implementing neural network activation functions as a Look-Up Table (LUT) using the Vivado HLS development environment, and integrated a custom-built PetaLinux distribution into the Vivado SDSoC environment as a usable software platform. At the same time, the demonstration prototype was upgraded with an Intel RealSense stereo module, and system testing was carried out.
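The alignment step mentioned above can be illustrated with the classic Kabsch algorithm, which estimates the rigid rotation and translation between paired points measured in the camera frame and in the robot base frame; this is a generic sketch, not the project’s implementation.

```python
# Generic sketch of the alignment step: the Kabsch algorithm estimates the
# rigid rotation R and translation t mapping paired points from the camera
# frame (P) to the robot base frame (Q). Not the project's implementation.
import numpy as np

def kabsch(P, Q):
    """Find R, t minimizing ||(P @ R.T + t) - Q|| for paired Nx3 points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known 90-degree rotation about z plus a translation.
P = np.random.rand(20, 3)
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(P, Q)
print(np.allclose(R, Rz), np.round(t, 3))     # True [ 0.5 -0.2  1. ]
```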
The scientific team developed an experimental stand for determining the real-time characteristics of LoRa-modulated packet transmission (transmission delay, packet duration, and the jitter of these parameters), created firmware for the stand’s LoRa modules, and wrote software for capturing and analyzing the stand’s data. At the same time, data were obtained on LoRa packet duration and jitter using the default communication parameters of the available LoRa modules. A board was designed for connecting a distance and temperature sensor to a microcontroller, and the robot was adapted for attaching the sensor board. The properties and capabilities of the microcontroller were studied, and software was written to use it more efficiently. In parallel, the researchers designed an extension to the robot prototype table and a camera mount, reviewed and tested the depth sensor module PCB, performed 3D modeling and printing of the camera positioning system, planned the construction and component assembly of the secondary end prototype computer, built parts for the robot prototype table extension and camera mount, designed a movable camera stand, and created a mounting solution to attach the camera to the stand.
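For cross-checking such packet-duration measurements, the standard LoRa time-on-air formula from the Semtech SX127x documentation can be computed directly; the sketch below implements that formula, with illustrative parameter defaults.

```python
# Sketch of the standard LoRa time-on-air formula (Semtech SX127x datasheet),
# handy for cross-checking packet-duration measurements from such a stand;
# the default parameters below are illustrative.
from math import ceil

def lora_time_on_air(payload_len, sf=12, bw_hz=125000, cr=1,
                     preamble=8, explicit_header=True, crc=True):
    """Packet duration in seconds; cr=1..4 means coding rate 4/5..4/8."""
    t_sym = (2 ** sf) / bw_hz                 # symbol duration in seconds
    de = 1 if t_sym > 0.016 else 0            # low-data-rate optimization flag
    ih = 0 if explicit_header else 1
    n_payload = 8 + max(
        ceil((8 * payload_len - 4 * sf + 28 + 16 * crc - 20 * ih)
             / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

# Example: a 20-byte payload takes ~1.32 s at SF12/125 kHz, ~57 ms at SF7.
print(round(lora_time_on_air(20, sf=12), 3), "s")
print(round(lora_time_on_air(20, sf=7) * 1000, 1), "ms")
```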
Within the project, R. Cacurs and J. Ārents participated in the “ROSCon 2018” conference in Madrid, Spain, from September 28 to October 1, 2018, where they learned about the latest ROS and ROS-Industrial trends and applications in industrial process automation and robotics, with the aim of successfully integrating them into the project.
In July, the project’s Deliverable 3 “Prototype of an Industrial Process Automation Actuator” was submitted.
12.07.2018.
During the period from April 1, 2018 to June 30, 2018, EDI continued to work intensively on both industrial research and experimental development activities within the project “Research on the Development of Computer Vision Techniques for Automation of Industrial Processes” (No. 1.2.1.1/16/A/002).
In the industrial research activity, the scientific team conducted an in-depth study of the Selective Search image segmentation method to improve both the bounding boxes containing objects and the accuracy of object localization itself, implemented improvements to the pile object localization algorithm, and worked on a dataset acquisition program to quickly gather input data for testing machine-learning-based localization models. In parallel, the project team worked intensively on improving the robot system: training and testing neural network software and adapting it to industrial robot tasks; comparing and analyzing the STOMP and OMPL motion planners to select the most suitable one for controlling the UR5; and developing a demonstration application for a full pick-and-place cycle that classifies two types of objects. Research was conducted on methods to accelerate manual data labeling and to obtain new test data. The project team worked on creating software support for the Intel RealSense depth stereo module, developing a distance sensor device, its software, and its communication with a microcontroller, and studying the operating principles of embedded SoC (System on Chip) systems. A scientific publication titled “Integration of computer vision and artificial intelligence subsystems with Robot Operating System based motion planning for industrial robots” was prepared on the industrial research highlights of the project.
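For reference, Selective Search region proposals are available in opencv-contrib, as sketched below; the project’s improvements on top of the method are not reproduced, and the file names are placeholders.

```python
# Illustrative use of Selective Search region proposals via opencv-contrib
# (cv2.ximgproc.segmentation); the project's improvements on top of the
# method are not reproduced, and the file names are placeholders.
import cv2

img = cv2.imread("pile.jpg")
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()      # a slower "quality" mode also exists
boxes = ss.process()                  # Nx4 array of (x, y, w, h) proposals

# Keep the first few hundred proposals; a later stage (e.g. a CNN classifier)
# decides which of them actually contain objects.
for (x, y, w, h) in boxes[:200]:
    cv2.rectangle(img, (int(x), int(y)), (int(x + w), int(y + h)),
                  (0, 255, 0), 1)
cv2.imwrite("proposals.jpg", img)
```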
In the experimental development activity, the scientific team worked on improving the software of the experimental system model, installing and adapting the software for work with a real robot and a new model computer. The team supplemented the experimental system model software with classification of two types of objects (rectangular parallelepipeds and cylindrical objects) using part of the Kinect v2 data, and developed a software demonstration application for automated sorting of two types of objects. Simultaneously, improvements were made to the synthetic data generation program, images of the test dataset were labeled, and experimental studies were carried out by training the object localizer with synthetically generated data and testing the models on real object images. In parallel, the scientific team worked on designing the robot model table extension and depth sensor module mounts, planning the model computer network support, and assembling components. A scientific publication titled “Integration of computer vision and artificial intelligence subsystems with Robot Operating System based motion planning for industrial robots” was prepared regarding the experimental development highlights of the project.
On May 23, 2018, during the Institute of Electronics and Computer Science “EDI Day” – an event about the institute’s scientific highlights in the field of innovation and commercialization – attendees were introduced to the DIPA project research highlights in industrial robot study and integration with computer vision solutions, and the current iteration of the robot system model (as developed up to mid-May 2018) was demonstrated.
04.04.2018.
During the last three months, in the project “Research on the Development of Computer Vision Techniques for Automation of Industrial Processes” (1.2.1.1/16/A/002), EDI has been working intensively on three industrial research activities:
Research on computer vision algorithms and methods and their application for industrial robot control and precise operation execution. Specifically, the scientific team worked on experimental tests with various object-containing boxes, resulting in a modified box edge localization program. As a result of the research, the code for object localization and bounding box detection was improved by experimenting with the Selective Search image segmentation algorithm. Simultaneously, the use of container software (Docker) was experimentally tested to combine the developed computer vision algorithms into a modular, common codebase.
Exploration and implementation of deep learning methods, covering software testing and improvements based on artificial neural network training for more accurate and faster bottle orientation detection in automated robotics solutions. At the same time, the scientific team prepared software for training and testing neural networks, adapting it to the EDI robotics test platform (see the sketch after this list).
Study and integration of available industrial robot components with computer vision solutions, including the creation of an appropriate industrial research model. Specifically, the scientific team worked intensively on adapting software to operate with a real robot system, connecting the URSim simulation environment with ROS, and installing the necessary drivers in the ROS environment. The functionality of the developed UR5 pick-and-place application was tested in the URSim simulation environment.
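Following up on the deep learning item above, here is a minimal, hedged sketch of what orientation-detection training can look like: a small CNN regressing the object angle as (sin, cos) to avoid the wrap-around at 360 degrees. The architecture, input size, and dummy data are illustrative assumptions, not the project’s network.

```python
# Hedged sketch for the deep learning item above: a small CNN regresses the
# object angle as (sin, cos), which avoids the wrap-around at 360 degrees.
# The architecture, input size, and dummy data are illustrative assumptions,
# not the project's network.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="tanh"),   # (sin, cos) of the angle
])
model.compile(optimizer="adam", loss="mse")

# Random arrays stand in for labeled bottle images and their angles.
x = np.random.rand(256, 64, 64, 1).astype("float32")
angles = np.random.uniform(0, 2 * np.pi, 256)
y = np.stack([np.sin(angles), np.cos(angles)], axis=1)
model.fit(x, y, epochs=2, batch_size=32)

pred = model.predict(x[:1])[0]
print("predicted angle:", np.degrees(np.arctan2(pred[0], pred[1])) % 360)
```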
In parallel with industrial research activities, work continued on experimental development activities. Over the last three months, the project’s scientific team improved the box detection algorithm implementation for the model computer, implemented the algorithm for aligning the Kinect and robot coordinate systems on the experimental model computer, and developed a mounting solution for the Kinect v2 and ZED stereo vision component modules.
04.01.2018.
During the last three months, EDI finalized the 2nd deliverable of the project “Research on the Development of Computer Vision Techniques for Automation of Industrial Processes” (1.2.1.1/16/A/002): “Image Dataset for Machine Learning Method Training”.
The 2nd deliverable describes the labeled datasets required to develop machine-learning-based image processing methods for industrial process automation. In this project phase, possibilities were explored for using neural-network-based object localization, which would allow the same program to be applied to localizing different objects. EDI researchers developed a method that automatically generates an appropriate training dataset from a few images of an object, enabling the localization system developed in the project to be quickly adapted to different objects. As a result, the 2nd deliverable outcome was obtained – two types of training image datasets: a) manually labeled datasets, annotated by humans, and b) automatically generated labeled datasets, produced with computer assistance.
For manual labeling, labeled data were obtained for two types of objects: cans and bottles. A total of 358 examples were obtained for the can dataset and 330 examples for the bottle dataset. The advantage of manual data labeling is its universality: a person can be instructed relatively quickly on what to label in the given images. Its drawback is that it is relatively slow; for example, obtaining a dataset of this type required approximately one working day of one person.
For automatic labeling, labeled data were obtained for two types of objects: cans and cylindrical objects. A total of 6000 examples were obtained for both cans and cylindrical objects. The advantage of automatic labeling is that it is much faster than manual labeling: the generation of one such dataset was completed within a few hours. Its drawback is that the method is not universal – it is not possible to easily develop an algorithm that generates this type of data for every task. The obtained datasets are an important component of deep learning method research and implementation.
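The automatic-labeling idea can be sketched as follows: paste a cut-out object image onto a background photo at a random pose and record the pasted rectangle as the ground-truth bounding box. The file names and parameter ranges below are placeholders, not the project’s generator.

```python
# Minimal sketch of the automatic-labeling idea: blend a cut-out object image
# (with an alpha channel) onto a background photo at a random pose and record
# the pasted rectangle as the ground-truth bounding box. File names and
# parameter ranges are placeholders, not the project's generator.
import random
import cv2

background = cv2.imread("background.jpg")
obj = cv2.imread("can_cutout.png", cv2.IMREAD_UNCHANGED)  # BGRA cut-out

def compose(background, obj):
    """Return (image, (x, y, w, h)) with the object blended in at random."""
    angle = random.uniform(0, 360)
    center = (obj.shape[1] // 2, obj.shape[0] // 2)
    M = cv2.getRotationMatrix2D(center, angle, random.uniform(0.7, 1.3))
    obj = cv2.warpAffine(obj, M, (obj.shape[1], obj.shape[0]))
    h, w = obj.shape[:2]
    x = random.randint(0, background.shape[1] - w)
    y = random.randint(0, background.shape[0] - h)
    alpha = obj[:, :, 3:4] / 255.0            # per-pixel opacity
    roi = background[y:y + h, x:x + w]
    background[y:y + h, x:x + w] = (alpha * obj[:, :, :3]
                                    + (1 - alpha) * roi).astype("uint8")
    return background, (x, y, w, h)

img, box = compose(background.copy(), obj)
print("label:", box)                          # one generated training example
cv2.imwrite("sample_000.jpg", img)
```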
From December 12 to 14, 2017, project assistant Jānis Ārents attended the “ROS-Industrial” conference in Stuttgart, Germany, to learn about the latest ROS-Industrial trends and applications in industrial process automation, with the goal of successfully integrating them into the project.
Since EDI is responsible within the project for improving industrial robot control software in the ROS environment and for controlling an industrial robot in a virtual environment using object data identified by computer vision, the knowledge gained at the “ROS-Industrial” conference, along with the demonstrations and meetings with leading researchers and industry leaders, will ensure a more innovative approach in EDI’s research within the project.
12.09.2017.
The project explored image acquisition methods using Kinect V2, Bumblebee XB3, and FLIR T650 thermal cameras. An industrial process model for data collection and processing was created, which acquires data from the Kinect V2 sensor, processes the data to detect pickable objects in a randomly piled heap, applies an artificial neural network (ANN) to identify the edge by which an object should be grasped, and simulates robot control.
Within the project, a bachelor’s thesis was developed and defended: “Exploration of the Possibilities of Integrating Industrial Robots and Computer Vision Solutions for Industrial Process Automation”.
The project results were presented at the Competence Center for the Production of Electrical and Optical Equipment in Latvia. More detailed information is available in the presentation.
31.05.2017.
The 1st deliverable of research No. 11, “Research on the Development of Computer Vision Techniques for Automation of Industrial Processes” – “Industrial Process Data Collection Model for Data Processing and Monitoring” – was submitted to the Competence Center for the Production of Electrical and Optical Equipment in Latvia.
The deliverable describes an industrial process data collection model for data processing and monitoring. The document examines data collection methods that allow obtaining information about the course of industrial processes. It also examines data processing methods for a specific industrial process task, in which objects that the robot manipulator can pick from a randomly piled heap are detected. The developed robot manipulator control system is described; it is used for robot simulation but can also be used with a real robot. At the end of the document, the overall data acquisition and processing model scheme is presented.
For more detailed information about the deliverable content, contact the Institute (email: info@edi.lv, phone: 67554500).
28.02.2017.
The project implements Activity 1, “Research on Computer Vision Data Acquisition and Processing Methods and Integration with Industrial Robot Components”. An experimental prototype system for data acquisition was created using the Kinect V2, in which parallelepiped-shaped objects were randomly placed in a 40×40×40 cm box. Using Python and the OpenCV library, an algorithm was implemented that detects the box boundaries in the depth map obtained from the Kinect sensor and determines whether there is any obstructing object between the camera and the box at that moment.
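A simplified sketch in the spirit of the described algorithm is given below; the depth thresholds, file names, and OpenCV (≥4.x) calls are illustrative assumptions rather than the project’s exact code.

```python
# Simplified sketch in the spirit of the described algorithm: locate the box
# outline in a Kinect depth map and flag obstructions between the camera and
# the box. Depth thresholds and file names are illustrative assumptions
# (OpenCV >= 4.x API).
import cv2
import numpy as np

depth = np.load("kinect_depth.npy")           # depth map in millimeters

# Anything much closer to the camera than the box rim is an obstruction.
BOX_RIM_MM = 900
obstructed = bool(((depth > 0) & (depth < BOX_RIM_MM - 150)).any())

# Binarize around the rim depth and take the largest external contour.
mask = ((depth > BOX_RIM_MM - 50) & (depth < BOX_RIM_MM + 50)).astype(np.uint8)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
box_contour = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(box_contour)

print("box bounds:", (x, y, w, h), "obstructed:", obstructed)
```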
The classification module was developed in Python using a deep neural network approach and is capable of recognizing the grasping edge of a test object (a specific parallelepiped object) from a visual image. Work was carried out on studying industrial processes that could be automated using computer vision methods. In the RobotStudio simulation environment, simulations were performed for moving parallelepiped-shaped objects using the developed robot suction gripper.
In relation to the project research, a bachelor’s thesis was developed: “Exploration of the Possibilities of Integrating Industrial Robots and Computer Vision Solutions for Industrial Process Automation”.
01.11.2016.
On 27.10.2016, a contract was signed between the Institute of Electronics and Computer Science and SIA “LEO PĒTĪJUMU CENTRS” for the implementation of the project “Competence Center for the Production of Electrical and Optical Equipment in Latvia” (No. 1.2.1.1/16/A/002), research No. 11: “Research on the Development of Computer Vision Techniques for Automation of Industrial Processes”, within the framework of the Operational Programme “Growth and Employment”, Specific Support Objective 1.2.1 “Increase Private Sector Investment in R&D”, measure 1.2.1.1 “Support for the Development of New Products and Technologies within Competence Centers”, for the period 01.11.2016–30.11.2018.
The goal of the research “Development of Computer Vision Techniques for Automation of Industrial Processes” is to explore and develop a set of methods for automating industrial processes using visual information acquisition (2D, active and passive 3D, etc.) and its processing with machine learning methods (neural networks, deep learning, classifiers, etc.).
The research consists of 2 activities:
Industrial research activity – research on computer vision data acquisition and processing methods and integration with industrial robot components, during which 2D and 3D data acquisition methods will be studied, implemented, and evaluated; computer vision algorithms and methods will be explored and applied for industrial robot control and precise operation execution; deep learning methods will be studied and implemented, including the exploration of deep learning architectures, representations, and control neural layers, system training with large datasets, and the creation of labeled datasets; and available industrial robot components will be explored and integrated with computer vision solutions, including the creation of an appropriate industrial research model.
Experimental development activity – during which a set of tools (image acquisition, computation, and actuation equipment along with corresponding machine learning algorithms) will be created and tested in a real-world-like environment. The toolset and computer vision algorithms will be developed solely for object recognition and localization to enable pick-and-place operations for further automation tasks, which are not covered within the project scope.
The total project funding (No. 1.2.1.1/16/A/002) is €227,529.05, most of which is co-financed by the European Union’s European Regional Development Fund.




