Contract no. 1.1.1.1/21/A/079.

The project is co-financed by REACT-EU funding for mitigating the consequences of the pandemic crisis.

The project aims to apply machine learning (ML) to microfluidics, using real-time data from reflected-light microscopy, TEER (trans-epithelial electrical resistance) measurements and O2 biosensors to grow different cell cultures (including cultures obtained from patient samples) on the organ-on-a-chip (OOC) platform.

The project is implemented by EDI in cooperation with the Latvian Biomedical Research and Study Centre (LBMC) and the limited liability company Cellbox Labs.

 

31.03.2022.

During the first reporting period, we developed a decision tree for data classification and produced the first imaging data of successful and unsuccessful OOC cultivation for developing AI models that supervise OOC cultivation. We identified possible approaches to generating synthetic data for training the AI models. Given the nature of the real-world data, the most promising approach is to generate synthetic data by drawing simple geometric shapes and subsequently deforming them; we are currently surveying the literature on this topic. We investigated integrated objective/camera units from various providers for integration into the instrument, with a particular focus on image quality, digital zoom capabilities and lighting conditions. We also started defining the procurement specification for an XYZ gantry with an XY step suitable for continuous channel imaging and a Z step suitable for reliable autofocus with the aforementioned imaging units. Additionally, during this period we presented the project topic to the public in an online interview titled “What if?”, organised by the student council of Riga Technical University in the Spiikiizi studio.
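
For illustration, below is a minimal sketch of the shape-based synthetic data idea described above: simple ellipses standing in for cells are drawn on a blank canvas and then warped with a smooth random displacement field. The library choices (NumPy, OpenCV) and all parameter values are assumptions made for this sketch, not the project's actual implementation.

```python
import numpy as np
import cv2

def draw_random_cells(size=256, n_cells=20, rng=None):
    """Render n_cells random ellipses ("cells") on a dark grayscale canvas."""
    if rng is None:
        rng = np.random.default_rng()
    img = np.full((size, size), 30, dtype=np.uint8)
    for _ in range(n_cells):
        center = tuple(rng.integers(20, size - 20, size=2).tolist())
        axes = tuple(rng.integers(5, 18, size=2).tolist())
        angle = int(rng.integers(0, 180))
        intensity = int(rng.integers(120, 220))
        cv2.ellipse(img, center, axes, angle, 0, 360, intensity, thickness=-1)
    return img

def elastic_deform(img, alpha=15.0, sigma=6.0, rng=None):
    """Warp the image with a smoothed random displacement field."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape
    dx = cv2.GaussianBlur((rng.random((h, w)) * 2 - 1).astype(np.float32), (0, 0), sigma) * alpha
    dy = cv2.GaussianBlur((rng.random((h, w)) * 2 - 1).astype(np.float32), (0, 0), sigma) * alpha
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    return cv2.remap(img, xs + dx, ys + dy, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)

synthetic = elastic_deform(draw_random_cells())
cv2.imwrite("synthetic_cells.png", synthetic)
```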

28.06.2022.

EDI investigated state-of-the-art approaches in the literature to generating synthetic images of biological cells by deforming simple geometric shapes. Furthermore, EDI investigated the use of generative adversarial networks (GANs) for simulation-to-real transfer, which is needed to make synthetic images look more realistic.
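
As a rough illustration of the simulation-to-real idea, the sketch below follows a SimGAN-style refinement setup: a refiner network modifies a synthetic image, and a discriminator is trained adversarially to distinguish refined images from real microscope images. The architectures, losses and parameter values are placeholder assumptions, not the networks actually evaluated in the project.

```python
import torch
import torch.nn as nn

# Inputs are assumed to be single-channel images normalised to [-1, 1].
refiner = nn.Sequential(                       # synthetic image -> refined image
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

discriminator = nn.Sequential(                 # image -> real/fake score map
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1))

bce = nn.BCEWithLogitsLoss()
opt_r = torch.optim.Adam(refiner.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(synthetic_batch, real_batch, reg_weight=10.0):
    # 1) Refiner update: fool the discriminator while staying close to the input.
    refined = refiner(synthetic_batch)
    score = discriminator(refined)
    loss_r = bce(score, torch.ones_like(score)) + \
             reg_weight * (refined - synthetic_batch).abs().mean()
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # 2) Discriminator update: separate real images from refined synthetic ones.
    score_real = discriminator(real_batch)
    score_fake = discriminator(refined.detach())
    loss_d = bce(score_real, torch.ones_like(score_real)) + \
             bce(score_fake, torch.zeros_like(score_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_r.item(), loss_d.item()
```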

26.09.2022.

EDI researched deep neural network architectures to find the best fit for the AimOOC task and looked for models pre-trained on medical images. A classification model was trained on the first data received from the partners; the results showed that additional data is needed for a good result. Therefore, options for data augmentation that allow the number of training examples to be multiplied synthetically were also researched and summarised.
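
As an illustration of the transfer-learning step, the sketch below takes an ImageNet-pretrained backbone from torchvision and replaces its final layer with a head for the OOC classes. The backbone choice (ResNet-18) and the class count are assumptions for this sketch, not the project's final configuration.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # placeholder class count, not the actual number used

# Load an ImageNet-pretrained backbone and swap in a new classification head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# A common strategy when labelled data is scarce: freeze the backbone and
# train only the new head first, then optionally unfreeze for fine-tuning.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")
```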

5.12.2022.

EDI researched methods for improving and multiplying the training data. We studied the methods included in various Python libraries and frameworks, choosing and testing those best suited to the project's images. EDI also continued work on the generation of synthetic medical data, looking into generative adversarial networks and novel diffusion models.
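
For illustration, a minimal sketch of classic augmentation with torchvision transforms is given below; the specific operations and parameter ranges are assumptions, not the set finally chosen for the project images.

```python
from torchvision import transforms

# Each time an image is loaded, a different randomised variant is produced,
# effectively multiplying the training set without new acquisitions.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
```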

1.01.2023 – 31.03.2023.

Having finished work on the data augmentation algorithms and the development of feature extraction algorithms, EDI continues work on the classification models, dividing the cell images according to time point and cell line. The models are trained according to the decision tree prepared in the earlier stages of the project. To improve model accuracy, we are currently working on two main tasks:

  1. models are trained on images divided into smaller quadrilateral tiles, which preserves image detail while increasing the number of training examples (see the sketch after this list);
  2. synthetic images are being generated with the help of Stable Diffusion to increase the amount of data and, thus, the accuracy of the classifier.
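
The sketch below illustrates the tiling step from item 1: each full-channel image is cut into smaller equal-sized tiles so that detail is preserved at native resolution while the number of training examples grows. The tile size and stride are placeholder assumptions.

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int = 256, stride: int = 256):
    """Split a (H, W) or (H, W, C) image into equal-sized tiles."""
    tiles = []
    h, w = img.shape[:2]
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(img[y:y + tile, x:x + tile])
    return tiles

# Example: a 512 x 2048 channel image yields 16 non-overlapping 256 x 256 tiles.
tiles = tile_image(np.zeros((512, 2048), dtype=np.uint8))
assert len(tiles) == 16
```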

During this period, Cellbox Labs produced 32 additional organ-on-a-chip devices and started work on Task 4.1, Firmware integration with the algorithm, alongside continued work on the TEER and oxygen sensors. To accelerate the initial adoption of algorithm-driven flow-rate adjustments in the channels, a dedicated user interface for manually entering the desired flow rate will be made. The AI algorithm developed by EDI currently uses flow-rate data rather than shear-force data for image tagging; to stay consistent with that convention, flow-rate data will continue to be used.

Meanwhile, LBMC produced around 900 additional microscope images of the lung-on-a-chip, lung-cancer-on-a-chip and gut-on-a-chip models using the visualisation system developed in WP2. Moreover, bright-field (BF) images and Hoechst staining images were produced, and the staining images were overlaid on the BF images to aid the machine learning process.
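
As an illustration of how a Hoechst staining image can be overlaid on the matching bright-field frame, a minimal sketch with OpenCV is given below. The file names, the blue colouring of the stain channel and the blending weights are assumptions, not LBMC's actual processing pipeline.

```python
import cv2
import numpy as np

bf = cv2.imread("brightfield.png", cv2.IMREAD_GRAYSCALE)        # placeholder file
hoechst = cv2.imread("hoechst.png", cv2.IMREAD_GRAYSCALE)       # placeholder file

bf_bgr = cv2.cvtColor(bf, cv2.COLOR_GRAY2BGR)
stain_bgr = np.zeros_like(bf_bgr)
stain_bgr[:, :, 0] = hoechst           # show the nuclear stain in the blue channel

# Blend the two registered images into a single training image.
overlay = cv2.addWeighted(bf_bgr, 0.7, stain_bgr, 0.3, 0)
cv2.imwrite("overlay.png", overlay)
```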

Furthermore, Cellbox Labs took part in two seminars organised by LBMC and Riga Technical University. The first seminar was on the role of biomedical research in the knowledge economy and biotech start-up success stories, while the second focused on the transition from university to industry. During these seminars, Cellbox Labs presented the project's ideas and direction to a wider scientific and industry audience.

1.04.2023 – 30.06.2023.

EDI worked on improving the accuracy of the image classifiers. Various deep neural network architectures were trained using both data supplemented with classic data augmentation methods and images synthesized with diffusion models. Upon receiving new data from the partners, EDI also created new datasets for training, continuously increasing the number of real images used. The trained models were validated on real images, and the results of these experiments were described in the conference paper “Synthetic Image Generation With a Fine-Tuned Latent Diffusion Model for Organ on Chip Cell Image Classification”, which was accepted at the SPA 2023: Signal Processing – Algorithms, Architectures, Arrangements, and Applications conference. To enhance the synthetic data, EDI researched additional image generation approaches that still rely on diffusion models but synthesize images by modifying existing real images rather than starting from random noise.
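
As a rough illustration of the image-to-image diffusion approach mentioned above, the sketch below starts from an existing real cell image and lets a latent diffusion model modify it rather than synthesising from pure noise. The checkpoint name, prompt and strength value are placeholder assumptions for the sketch, not the project's actual fine-tuned model or settings.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder checkpoint; in practice a model fine-tuned on OOC images would be used.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("real_ooc_cells.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="bright field microscopy image of epithelial cells in a microfluidic channel",
    image=init_image,
    strength=0.4,          # low strength keeps most of the real image structure
    guidance_scale=7.5,
).images[0]
result.save("synthetic_variant.png")
```
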
While working on integrating the trained models into the prototype developed in the project, EDI studied the parameters of candidate embedded computing platforms to arrive at a model version that can run on the prototype.
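
As an illustration of preparing a trained classifier for an embedded platform, the sketch below exports a PyTorch model to ONNX so that it can be run by a lightweight runtime on the device; the model, input resolution and file names are assumptions for this sketch.

```python
import torch
from torchvision import models

model = models.resnet18(num_classes=4)   # placeholder for the trained classifier
model.eval()

dummy = torch.randn(1, 3, 224, 224)      # one example input of the expected shape
torch.onnx.export(model, dummy, "ooc_classifier.onnx",
                  input_names=["image"], output_names=["logits"],
                  opset_version=17)
```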

 

Project achievements

During the project, we captured over 4,000 bright-field images from both successful and unsuccessful experiments involving organ-on-a-chip (OOC) models derived from six distinct cell lines at various time points. These images have been made publicly available in a well-known repository, improving the accessibility of our data, and our team has submitted an article detailing the data collection methodology and findings.

In addition to the imaging work, we developed a real-time bright-field imaging system specifically designed for OOC applications and used it throughout the project. We also created a machine learning algorithm tailored to analysing the acquired images as well as synthetically generated images; its development and potential applications are described in a separate submitted article.

A significant milestone was reached when we tested this algorithm and the associated automated decision-making on patient-derived iPSC lung-on-a-chip and lung-cancer-on-a-chip models. These tests showed that, while the model based on stable cell lines is robust, it requires further refinement using data derived from patient models.

The project's results and methodologies were presented at three international conferences, and two research articles offering a deeper look at our findings were prepared and submitted. The project's final report, including detailed accounts of all tasks undertaken and their outcomes, has also been compiled and submitted, marking the completion of the project.

 

Publications

Maksims Ivanovs, Laura Leja, Karlis Zviedris, Roberts Rimsa, Karina Narbute, Valerija Movcana, Felikss Rumnieks, Arnis Strods, Kevin Gillois, Gatis Mozolevskis, Arturs Abols and Roberts Kadiķis, “Synthetic Image Generation With a Fine-Tuned Latent Diffusion Model for Organ on Chip Cell Image Classification,” 2023 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), DOI: 10.23919/SPA59660.2023.10274460, https://ieeexplore.ieee.org/abstract/document/10274460

Valērija Movčana, Arnis Strods, Karīna Narbute, Fēlikss Rūmnieks, Roberts Rimša, Gatis Mozoļevskis, Maksims Ivanovs, Roberts Kadiķis, Kārlis Gustavs Zviedris, Laura Leja, Anastasija Zujeva, Tamāra Laimiņa, Arturs Abols, “Organ-on-a-Chip (OOC) Image Dataset for Machine Learning and Tissue Model Evaluation,” Data (MDPI), DOI: 10.3390/data9020028, https://www.mdpi.com/2306-5729/9/2/28

Conference

Maksims Ivanovs, Laura Leja, Karlis Zviedris, Roberts Rimsa, Karina Narbute, Valerija Movcana, Felikss Rumnieks, Arnis Strods, Kevin Gillois, Gatis Mozolevskis, Arturs Abols and Roberts Kadiķis, “Synthetic Image Generation With a Fine-Tuned Latent Diffusion Model for Organ on Chip Cell Image Classification,” SPA 2023: Signal Processing – Algorithms, Architectures, Arrangements, and Applications, 20–22 September 2023, Poznań, Poland

Participating scientists

    Mg. math. Laura Leja

    Researcher

    +371 67558147
    [protected]
    Mg. sc. cogn. Maksims Ivanovs

    Researcher

    +371 67558230
    [protected]
    Mg. math. Tamāra Laimiņa

    Research assistant

    +371 67558202; +371 67558207
    [protected]
    Dr. sc. ing. Roberts Kadiķis

    Senior Researcher

    +371 67558134
    [protected]