
The RuM laboratory researches and develops technologies that allow computerized systems to perceive and interpret the world, make decisions, and act. We believe that robotics, artificial perception, and artificial intelligence will play an increasingly important role in the future of humanity: in people’s daily lives, in industry and the economy, and in politics. The goal of EDI RuM is to become a major player in shaping this future, a player whose research results and technologies are not only recognized by the scientific community but also contribute to people’s well-being, safety, and health.
Keywords:
- IoT, wireless sensor networks, smart sensors
- signal and image processing
- AI, computer vision, machine learning, deep neural networks
- embedded intelligence, edge and fog computing, FPGA, SoC
- automation, robot control
Part of our perception research covers different kinds of smart sensors, sensor systems, and wireless sensor networks for industrial automation, environment monitoring, and integrity control of complex systems.
One of the most widely used sensors in our research is the video camera (including stereo cameras and depth sensors). We develop and apply computer vision algorithms that use camera data for classification, object detection, and image segmentation tasks. We have developed such algorithms for intelligent transportation systems (vehicle detection and classification, number plate recognition), for the segmentation and classification of biomedical images, and for the perception systems of autonomous vehicles; however, the use cases for artificial vision are much broader.
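To give a flavor of the virtual-detection-line idea behind our vehicle-counting work, here is a minimal toy sketch (not the published algorithm; the function names, threshold, and occupancy fraction are illustrative choices): a single line of pixels across a road lane is compared against a background model, and a vehicle is declared when enough of the line differs from the background.

```python
import numpy as np

def detection_line_foreground(frame_line, background_line, threshold=25):
    """Mark pixels on a virtual detection line that differ from the background.

    frame_line, background_line: 1-D uint8 arrays of grayscale intensities
    sampled along the same pixel line in the current frame and in a
    background model. Returns a boolean mask of "foreground" pixels.
    """
    diff = np.abs(frame_line.astype(np.int16) - background_line.astype(np.int16))
    return diff > threshold

def vehicle_present(frame_line, background_line, min_fraction=0.3):
    """Declare a vehicle on the line if enough of the line is foreground."""
    mask = detection_line_foreground(frame_line, background_line)
    return mask.mean() >= min_fraction

# Toy example: a bright object covers half of a dark, empty road line.
background = np.full(100, 40, dtype=np.uint8)   # empty road
frame = background.copy()
frame[25:75] = 200                              # passing vehicle
print(vehicle_present(frame, background))       # prints True
```

In practice the line is sampled from every video frame, so counting vehicles reduces to detecting rising edges of this boolean signal over time.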
Currently, deep neural network-based AI methods demonstrate the best performance in many machine perception tasks. Therefore, the RuM laboratory researches and applies different deep network architectures: convolutional neural networks (CNN), recurrent neural networks (RNN), and generative adversarial networks (GAN). We train and test deep models on EDI’s high-performance computing center, which includes 12 NVIDIA Tesla K40 GPUs. Since the use of deep learning in many practical applications is hindered by the need for large amounts of annotated training data, RuM also researches methods for generating synthetic training data and for using simulated environments to train AI models.
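A key appeal of synthetic training data is that annotations come for free: because the generator places each object itself, it knows the ground truth exactly. A minimal compositing sketch (a simplified toy with hypothetical helper names, not our actual pipeline) pastes an object crop at a random position on a background and emits the image together with its bounding box:

```python
import numpy as np

def synthesize_sample(background, obj, rng):
    """Paste an object crop at a random position on a background image and
    return the composite together with its bounding-box annotation.
    The annotation is exact because we control the placement ourselves."""
    H, W = background.shape[:2]
    h, w = obj.shape[:2]
    y = rng.integers(0, H - h + 1)  # upper bound exclusive
    x = rng.integers(0, W - w + 1)
    image = background.copy()
    image[y:y + h, x:x + w] = obj
    bbox = (x, y, x + w, y + h)     # (x_min, y_min, x_max, y_max)
    return image, bbox

rng = np.random.default_rng(0)
background = np.zeros((64, 64), dtype=np.uint8)   # blank grayscale scene
obj = np.full((16, 16), 255, dtype=np.uint8)      # bright square "object"
image, bbox = synthesize_sample(background, obj, rng)
```

Real generators add random rotation, scaling, lighting, and occlusion on top of this, but the principle — labels derived from the generation process rather than human annotation — is the same.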
Embedded intelligence is another significant research area of the RuM lab. Sensor data are processed close to the sensors, so data processing and decision-making shift from data centers and cloud computing to the boundary between the cloud and the physical world (edge/fog computing). Algorithms embedded in specialized chips (field-programmable gate arrays, FPGA; systems-on-chip, SoC) enable low-latency and energy-efficient solutions. We combine this approach with the development of computationally efficient algorithms that can be implemented on low-cost devices (e.g., a video processing algorithm running on a Raspberry Pi Zero computer), which allows us to increase the autonomy of energy-constrained platforms such as small drones.
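One common route to energy-efficient inference on integer-only hardware such as FPGAs is quantization: weights and activations are stored as 8-bit integers, the accumulation runs in integer arithmetic, and a single rescale recovers the floating-point result. The sketch below is a generic illustration of symmetric int8 quantization for a linear layer (not the lab's deployed code; scales and shapes are arbitrary):

```python
import numpy as np

def quantize(x, scale):
    """Uniform symmetric quantization to int8, as used when mapping
    neural-network layers onto integer-only hardware (FPGA/SoC)."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def int8_linear(x_q, w_q, x_scale, w_scale):
    """Integer matrix multiply with a single float rescale at the end."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)  # integer accumulator
    return acc * (x_scale * w_scale)                   # dequantize result

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 4)).astype(np.float32)   # toy activations
w = rng.normal(size=(4, 3)).astype(np.float32)   # toy weights
x_scale = np.abs(x).max() / 127
w_scale = np.abs(w).max() / 127
y_int = int8_linear(quantize(x, x_scale), quantize(w, w_scale), x_scale, w_scale)
y_ref = x @ w   # float reference; y_int approximates this closely
```

The multiply-accumulate loop touches only small integers, which is what makes such layers cheap in silicon area and energy on embedded targets.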
The robotics division of our laboratory combines the perception and embedded intelligence capabilities described above with robot control. This allows us to work in fields such as autonomous platforms (driving, flying) and smart production, which brings the manufacturing process into the modern digital age (Industry 4.0). To further automate production, the next generation of industrial robots must be able to cooperate safely with humans and adaptively manipulate different objects under modern, dynamic production conditions. Robots must become aware of their surrounding environment and use this awareness to adapt and optimize their future actions.
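At the control end, adaptive manipulation ultimately means turning a perceived goal position into joint motions. As a self-contained toy example (a planar two-link arm with unit link lengths and arbitrary gains, not any specific robot of ours), damped least-squares inverse kinematics iteratively nudges the joint angles until the end effector reaches a target:

```python
import numpy as np

def forward(theta, l1=1.0, l2=1.0):
    """End-effector (x, y) position of a planar two-link arm."""
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta, l1=1.0, l2=1.0):
    """Partial derivatives of the end-effector position w.r.t. joint angles."""
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def solve_ik(target, theta, iters=100, damping=1e-3):
    """Damped least-squares (Levenberg-Marquardt style) IK iteration."""
    for _ in range(iters):
        J = jacobian(theta)
        error = target - forward(theta)
        theta = theta + np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ error)
    return theta

target = np.array([1.2, 0.8])            # reachable point (|target| < l1 + l2)
theta = solve_ik(target, np.array([0.5, 0.5]))
```

The damping term keeps the update well-behaved near singular arm configurations, which is why this formulation is popular in practical motion-planning stacks.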
Assigned Projects
- Vision, Identification, with Z-sensing Technology and key Applications (VIZTA) #H2020
- Framework of key enabling technologies for safe and autonomous drones applications (COMP4DRONES) #H2020
- Artificial Intelligence for Digitizing Industry (AI4DI) #H2020
- Development of robotic weed management equipment (RONIN) #ESIF
- Programmable Systems for Intelligence in Automobiles (PRYSTINE) #H2020
- A study on the development of computer vision techniques for automating industrial processes (DIPA) #ESIF
- A low-cost bolus for rumen parameter monitoring and early diagnosis of subacute ruminal acidosis (SARA) in cows #ESIF
- Intelligent Motion Control Platform for Smart Mechatronic Systems (I-MECH) #H2020
Assigned Publications
- Buls, E., Kadikis, R., Cacurs, R., & Ārents, J. (2019). Generation of synthetic training data for object detection in piles. In Eleventh International Conference on Machine Vision (ICMV 2018) (Vol. 11041, p. 110411Z).
- Ārents, J., Cacurs, R., & Greitans, M. (2018). Integration of computer vision and artificial intelligence subsystems with Robot Operating System based motion planning for industrial robots. Automatic Control and Computer Sciences, 52(5).
- Dendaluce Jahnke, M., Cosco, F., Novickis, R., Pérez Rastelli, J., & Gomez-Garay, V. Efficient neural network implementations on parallel embedded platforms applied to real-time torque-vectoring optimization using predictions for multi-motor electric vehicles. Electronics (ISSN 2079-9292).
- Kadikis, R. (2018). Recurrent neural network based virtual detection line. In Tenth International Conference on Machine Vision (ICMV 2017) (Vol. 10696, p. 106961V). International Society for Optics and Photonics.
- Sudars, K. (2017). Face recognition Face2vec based on deep learning: small database case. Automatic Control and Computer Sciences, 51(1).
- Dorbe, N., Kadikis, R., & Nesenbergs, K. (2017). Vehicle type and licence plate localisation and segmentation using FCN and LSTM. In Proceedings of New Challenges of Economic and Business Development 2017, Riga, Latvia, May 18-20, 2017, pp. 143-151.
- Baums, A., Gordjusins, A., & Kanonirs, G. (2012). Development of a mobile research robot. In ICINCO 2012, 9th International Conference on Informatics in Control, Automation and Robotics, 28-31 July 2012, Rome, Italy (Electronic Proceedings), pp. 329-332.
- Baums, A., & Gordyusins, A. (2015). An evaluation of the motion of a reconfigurable mobile robot over a rough terrain. Automatic Control and Computer Sciences, 49(5), pp. 39-45.
- Physical model for solving problems of cost-effective mobile robot development.
- Kadiķis, R., & Freivalds, K. (2013). Vehicle classification in video using virtual detection lines. In Sixth International Conference on Machine Vision (ICMV 2013) (Vol. 9067, p. 90670Y). International Society for Optics and Photonics.
Head of Laboratory
Assigned Specialists
Dr. sc. ing. Kaspars Ozols
Deputy Director of Development, Senior Researcher
+371 67558161