RuM laboratory researches and develops technologies that allow computerized systems to perceive and interpret the world, make decisions, and act. We believe that robotics, as well as artificial perception and intelligence, will play an increasingly important role in the future of humanity – in daily life, in industry and the economy, and in politics. The goal of EDI RuM is to become a major player in shaping this future – a player whose research results and technologies are not only recognized by the scientific community but also contribute to the well-being, safety, and health of mankind.
- IoT, wireless sensor networks, smart sensors
- signal and image processing
- AI, xAI, computer vision, machine learning, deep neural networks
- embedded intelligence, accelerators, edge and fog computing, FPGA, SoC
- automation, industrial robotics, real-time control, mobile robots
Part of our perception research covers different kinds of smart sensors, sensor systems, and wireless sensor networks for industrial automation, environment monitoring, and integrity control of complex systems.
We also develop and adapt computer vision algorithms for classification, object detection, and image segmentation tasks. We have applied such algorithms to autonomous vehicles, biomedicine, intelligent transportation systems (vehicle and pedestrian detection and classification, number plate recognition), agriculture (detection of crops and weeds), and video analysis. However, the possible use cases for our AI-enabled computer vision are much broader.
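Several of the lab's transportation publications (the virtual detection line papers in the list below) count vehicles by watching a single line of pixels over time rather than processing whole frames. A minimal sketch of that idea, with a synthetic frame sequence standing in for real video – the function name, threshold, and background model here are illustrative assumptions, not the published implementation:

```python
import numpy as np

def count_line_crossings(frames, line_row, diff_threshold=30.0):
    """Count objects crossing a virtual detection line.

    The pixel row at `line_row` is compared against a background
    estimate (the per-pixel median of that row over all frames); a
    frame is 'active' when the mean absolute difference exceeds
    `diff_threshold`, and each contiguous active run counts as one
    object passing the line.
    """
    rows = np.stack([f[line_row].astype(float) for f in frames])
    background = np.median(rows, axis=0)
    active = np.abs(rows - background).mean(axis=1) > diff_threshold
    # Count rising edges: transitions from inactive to active.
    return int(np.sum(active[1:] & ~active[:-1]) + int(active[0]))

# Synthetic example: 20 dark grayscale frames with two bright
# "vehicles" passing over row 5 (frames 3-5 and 12-15).
frames = [np.zeros((10, 8)) for _ in range(20)]
for i in list(range(3, 6)) + list(range(12, 16)):
    frames[i][4:7, :] = 200.0
print(count_line_crossings(frames, line_row=5))  # → 2
```

The published methods replace the fixed threshold with learned models (e.g. the RNN-based detection line cited below), but the spatio-temporal reduction to a single line is the same.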
Currently, deep neural network-based methods are demonstrating the best performance in many machine perception tasks. Therefore, RuM laboratory explores and applies different network models, including CNNs, RNNs, and GANs, which we train and test at EDI's high-performance computing center. Since the use of deep learning in many practical applications is hindered by the need for large amounts of annotated training data, RuM also researches methods for the generation of synthetic training data and the use of simulated environments in training AI models. Also, our interest in medical image analysis has led us to explore the explainable AI field and bio-inspired learning models.
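The appeal of synthetic training data can be shown with a toy renderer: because the generator places the objects itself, every image comes with exact bounding-box annotations for free. A minimal sketch under that assumption – a hypothetical stand-in, not the lab's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(h=64, w=64, n_objects=3):
    """Render a synthetic grayscale scene plus its bounding-box labels.

    Random bright rectangles stand in for objects on a noisy
    background; the box used to draw each rectangle is recorded as a
    ground-truth annotation, so no manual labelling is needed.
    """
    img = rng.normal(loc=60.0, scale=10.0, size=(h, w))  # noisy background
    boxes = []
    for _ in range(n_objects):
        bh, bw = rng.integers(8, 20, size=2)              # object size
        y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
        img[y:y + bh, x:x + bw] = rng.uniform(180.0, 255.0)
        boxes.append((int(x), int(y), int(x + bw), int(y + bh)))  # x0,y0,x1,y1
    return np.clip(img, 0, 255).astype(np.uint8), boxes

image, labels = make_sample()
print(image.shape, len(labels))  # → (64, 64) 3
```

Real generators (game engines, physics simulators) add texture, lighting, and domain randomization on top of the same principle.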
Embedded intelligence is another significant research area of the RuM lab. We excel at utilizing a variety of design abstractions to push performance limits to the edge. We combine our expertise to develop industrial-grade sensing nodes with very low latency (<500 μs) and advanced control algorithms for autonomous driving. To achieve high-performance, low-power solutions, we design accelerators using specialized chips – field-programmable gate arrays (FPGAs). Our accelerators range from low-level image preprocessing and feature extraction to DNN inference and the acceleration of video-based localization for constrained systems. We design Linux-based autonomous flight platforms using modern SoCs (systems-on-chip) for future drones, which will be capable of localization without GPS or ground stations.
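A core ingredient of FPGA-friendly neural network acceleration is replacing floating-point arithmetic with integer multiply-accumulate operations, which map directly onto DSP blocks. The sketch below shows the generic int8 quantization pattern in NumPy for readability; the scales, bit widths, and layer shape are illustrative assumptions, not the lab's published design:

```python
import numpy as np

def quantize(x, scale):
    """Map float values to int8, as a fixed-point FPGA datapath would."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dense_int8(x_q, w_q, x_scale, w_scale):
    """Integer-only dense layer: int8 inputs/weights, int32 accumulator.

    Every multiply and add stays in integer arithmetic (the part that
    runs on FPGA DSP blocks); only the final rescale back to floats
    happens here, and on hardware it becomes a shift/multiply.
    """
    acc = w_q.astype(np.int32) @ x_q.astype(np.int32)   # int32 MACs
    return acc * (x_scale * w_scale)                    # dequantize

rng = np.random.default_rng(1)
x = rng.normal(size=4)
w = rng.normal(size=(3, 4))
x_s, w_s = np.abs(x).max() / 127, np.abs(w).max() / 127
y_int = dense_int8(quantize(x, x_s), quantize(w, w_s), x_s, w_s)
# y_int stays close to the float reference w @ x; the residual is
# bounded by the quantization step sizes.
```

Throughput-optimized FPGA implementations then pipeline and unroll these integer MACs across the network layers.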
The robotics division of our laboratory combines the perception and embedded intelligence capabilities described above with the control of robots. This allows us to work with technologies such as autonomous (driving and flying) platforms and smart production robots, which bring the manufacturing process into the modern digital age (Industry 4.0). For increased automation of the production process, next-generation industrial robots have to cooperate safely with humans and adaptively manipulate different objects under modern, dynamic production conditions. Robots have to become aware of their surrounding environment and use this awareness to adapt and optimize their future actions.
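The awareness-driven adaptation described above can be illustrated with a toy sense-plan-act step: given the robot's current map of the environment, a planner computes a collision-free path and re-plans whenever the map changes. A minimal breadth-first grid planner – a hypothetical stand-in for the ROS-based motion planning used in practice:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first path planning on an occupancy grid.

    `grid` is a list of strings ('#' = obstacle, '.' = free); cells are
    (row, col) tuples. Returns the shortest 4-connected path from
    `start` to `goal`, or None if the goal is unreachable.
    """
    h, w = len(grid), len(grid[0])
    prev = {start: None}           # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:           # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        y, x = cell
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] != '#' \
                    and (ny, nx) not in prev:
                prev[(ny, nx)] = cell
                queue.append((ny, nx))
    return None

grid = ["....",
        ".##.",
        "...."]
path = plan_path(grid, (0, 0), (2, 3))
print(len(path) - 1)  # → 5 moves around the obstacle
```

Production planners add kinematic constraints, smoothing, and continuous re-planning, but the perceive-map-plan loop is the same shape.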
- Vision, Identification, with Z-sensing Technology and key Applications (VIZTA) #H2020
- Framework of key enabling technologies for safe and autonomous drones applications (COMP4DRONES) #H2020
- Artificial Intelligence for Digitizing Industry (AI4DI) #H2020
- Silicon IP Design House (SilHouse) part 2 #H2020
- Digital Technologies, Advanced Robotics and increased Cyber-security for Agile Production in Future European Manufacturing Ecosystems (TRINITY) #H2020
- Programmable Systems for Intelligence in Automobiles (PRYSTINE) #H2020
- Development of a robotic weed management equipment (RONIN) #ESIF
- Intelligent Motion Control Platform for Smart Mechatronic Systems (I-MECH) #H2020
- Artificial intelligence for more precise diagnostics (AI4DIAG) #ESIF
- Real time stereo-vision depth sensing sensor (STRIVE) #ESIF
- Deep neural network method for improving the accuracy of tracking and classification of vehicle registration plates (DziNTA) #ESIF
- Low-cost bolus for monitoring rumen parameters and early diagnosis of subacute ruminal acidosis (SARA) in cows #ESIF
- Research on the development of computer vision techniques for the automation of industrial processes (DIPA) #ESIF
- Novickis, R., Levinskis, A., Kadiķis, R., Feščenko, V., Ozols, K. (2020). Functional architecture for autonomous driving and its implementation. 17th Biennial Baltic Electronics Conference (BEC2020), Tallinn, Estonia.
- Justs, D., Novickis, R., Ozols, K., Greitāns M. (2020). Bird's-eye view image acquisition from simulated scenes using geometric inverse perspective mapping. 17th Biennial Baltic Electronics Conference (BEC2020), Tallinn, Estonia.
- Martin Dendaluce Jahnke, Francesco Cosco, Rihards Novickis, Joshué Pérez Rastelli, Vicente Gomez-Garay "Efficient Neural Network Implementations on Parallel Embedded Platforms applied to Real-Time Torque-Vectoring Optimization using Predictions for Multi-Motor Electric Vehicles", Electronics (ISSN 2079-9292)
- Buls, E., Kadikis, R., Cacurs, R., & Ārents, J. (2019, March). Generation of synthetic training data for object detection in piles. In Eleventh International Conference on Machine Vision (ICMV 2018) (Vol. 11041, p. 110411Z)
- Jānis Ārents, Ričards Cacurs, Modris Greitāns, "Integration of Computer Vision and Artificial Intelligence Subsystems with Robot Operating System Based Motion Planning for Industrial Robots", Automatic Control and Computer Sciences Journal, Volume 52, Issue 5, 2018
- R.Novickis, D.J. Justs, K.Ozols and M.Greitāns "An Approach of Feed-Forward Neural Network Throughput-Optimized Implementation in FPGA", Electronics journal: Special Issue Advanced AI Hardware Designs Based on FPGAs, 2020
- Martin Čech, Arend-Jan Beltman, Kaspars Ozols. I-MECH – Smart System Integration for Mechatronic Applications. Published in: 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)
- Kadikis, R. (2018, April). Recurrent neural network based virtual detection line. In Tenth International Conference on Machine Vision (ICMV 2017) (Vol. 10696, p. 106961V). International Society for Optics and Photonics
- SUDARS, K., 2017. Face Recognition Face2vec Based on Deep Learning: Small Database Case, Automatic Control and Computer Sciences, Vol. 51, No. 1.
- N. Dorbe, R. Kadikis, K. Nesenbergs. “Vehicle type and licence plate localisation and segmentation using FCN and LSTM”, Proceedings of New Challenges of Economic and Business Development 2017, Riga, Latvia, May 18-20, 2017, pp. 143-151
- Baums A., Gordjusins A., Kanonirs G., „Development of mobile research robot", ICINCO 2012, 9th International Conference on Informatics in Control, Automation and Robotics, 28-31 July, 2012, Rome, Italy. (Electronic Proceedings), pp. 329-332.
- BAUMS, A., GORDYUSINS, A., 2015. An evaluation of the motion of a reconfigurable mobile robot over a rough terrain. Automatic Control and Computer Sciences, Allerton Press, Inc., vol. 49, no. 5, pp. 39-45
- Physical model for solving problems of cost-effective mobile robot development
- Kadiķis, R., & Freivalds, K. (2013, December). Vehicle classification in video using virtual detection lines. In Sixth International Conference on Machine Vision (ICMV 2013) (Vol. 9067, p. 90670Y). International Society for Optics and Photonics.
- Druml, N., Debaillie, B., Anghel, A., Ristea, N. C. (2020). Programmable Systems for Intelligence in Automobiles (PRYSTINE): Technical Progress after Year 2, 2020 23rd Euromicro Conference on Digital System Design (DSD). doi: 10.1109/DSD51259.2020.00065
- Sudars, K., Jasko, J., Namatevs I., Ozola L., Badaukis, N. (2020). Dataset of annotated food crops and weed images for robotic computer vision control, Data in Brief, 31. doi:10.1016/j.dib.2020.105833
- Namatevs, I., Sudars, K., Polaka, I., Automatic data labeling by neural networks for the counting of objects in videos, Procedia Computer Science, Vol.149, pp. 151-158, 2019
Head of Laboratory
Deputy Director of Development, Senior Researcher, +371 67558161