Search Results (showing 1 - 10 of 11)

Item: Recognising Specific Foods in MRI Scans Using CNN and Visualisation (The Eurographics Association, 2020)
Authors: Gardner, Joshua; Al-Maliki, Shatha; Lutton, Évelyne; Boué, François; Vidal, Franck. Editors: Ritsos, Panagiotis D.; Xu, Kai.
This work is part of an experimental project aiming at understanding the kinetics of human gastric emptying. For this purpose, magnetic resonance imaging (MRI) images of the stomach of healthy volunteers have been acquired using a state-of-the-art scanner with an adapted protocol. The challenge is to follow the stomach content (food) in the data. Frozen garden peas and petits pois have been chosen as an experimental proof-of-concept as their shapes are well defined and are not altered in the early stages of digestion. The food recognition is performed as a binary classification implemented using a deep convolutional neural network (CNN). Input hyperparameters, here image size and number of epochs, were exhaustively evaluated to identify the combination of parameters that produces the best classification. The results have been analysed using interactive visualisation. We prove in this paper that advances in computer vision and machine learning can be deployed to automatically label the content of the stomach even when the amount of training data is low and the data is imbalanced. Interactive visualisation helps identify the most effective combinations of hyperparameters to maximise accuracy, precision, recall and F1 score, leaving the end-user to evaluate the possible trade-off between these metrics. Food recognition in MRI scans through a neural network produced an accuracy of 0.97, a precision of 0.91, a recall of 0.86 and an F1 score of 0.89, all close to 1.
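The abstract above gives no implementation details. Purely as an illustration, a minimal sketch of a binary pea / not-pea CNN with an exhaustive sweep over the two hyperparameters it names (image size and number of epochs) could look like the following; the Keras architecture, the class weights and the random placeholder data are assumptions, not the authors' code.

```python
# Illustrative sketch only, not the authors' implementation.
# Binary pea / not-pea CNN plus an exhaustive sweep over image size and epochs.
import itertools
import numpy as np
from tensorflow.keras import layers, metrics, models

def build_cnn(image_size):
    """Small binary classifier for single-channel MRI patches."""
    model = models.Sequential([
        layers.Input(shape=(image_size, image_size, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # probability of "pea"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", metrics.Precision(), metrics.Recall()])
    return model

rng = np.random.default_rng(42)
results = []
for image_size, epochs in itertools.product([16, 32, 64], [10, 25, 50]):
    # Placeholder data: random patches with imbalanced binary labels (~15% peas).
    x_train = rng.random((200, image_size, image_size, 1))
    y_train = (rng.random(200) < 0.15).astype("float32")
    x_val = rng.random((50, image_size, image_size, 1))
    y_val = (rng.random(50) < 0.15).astype("float32")

    model = build_cnn(image_size)
    model.fit(x_train, y_train, epochs=epochs, verbose=0,
              class_weight={0: 1.0, 1: 5.0})      # crude imbalance handling
    loss, acc, prec, rec = model.evaluate(x_val, y_val, verbose=0)
    f1 = 2 * prec * rec / (prec + rec + 1e-9)
    results.append((image_size, epochs, acc, prec, rec, f1))
```

Each (image size, epochs) pair yields one row of accuracy, precision, recall and F1, which is the kind of table the paper then explores with interactive visualisation.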
Item: Projectional Radiography Simulator: an Interactive Teaching Tool (The Eurographics Association, 2019)
Authors: Sujar, Aaron; Kelly, Graham; García, Marcos; Vidal, Franck. Editors: Vidal, Franck P.; Tam, Gary K. L.; Roberts, Jonathan C.
Radiographers need a broad range of knowledge about X-ray radiography, which can be specific to each part of the body. Due to the harmfulness of the ionising radiation used, teaching and training using real patients is not ethical. Students have limited access to real X-ray rooms and anatomic phantoms during their studies. Books, and now web apps, containing a set of static pictures are therefore often used to illustrate clinical cases. In this study, we have built an Interactive X-ray Projectional Simulator using a deformation algorithm with a real-time X-ray image simulator. Users can load various anatomic models, and the tool enables virtual model positioning in order to set a specific position and see the corresponding X-ray image. It allows teachers to simulate any particular X-ray projection in a lecturing environment without using real patients and avoiding any kind of radiation risk. This tool also allows students to reproduce the important parameters of a real X-ray machine in a safe environment. We have performed a face and content validation in which our tool proves to be realistic (72% of the participants agreed that the simulations are visually realistic), useful (67%) and suitable (78%) for teaching X-ray radiography.

Item: Where's Wally? A Machine Learning Approach (The Eurographics Association, 2021)
Authors: Barthelmes, Tobias; Vidal, Franck. Editors: Xu, Kai; Turner, Martin.
Object detection has been implemented in all sorts of real-life scenarios such as facial recognition, traffic monitoring and medical imaging, but the research that has gone into object detection in drawings and cartoons is not nearly as extensive. The Where's Wally puzzle books give a good opportunity to apply some of these real-life methods to the fictional world. The proposed Wally detection framework is composed of two stages: i) a Haar-cascade classifier based on the Viola-Jones framework, which detects possible candidates from a scene in the Where's Wally books, and ii) a lightweight convolutional neural network (CNN) that re-labels the objects detected by the cascade classifier. The cascade classifier was trained on 85 positive images and 172 negative images. It was then applied to 12 test images, which produced over 400 false positives. To increase the accuracy of the models, hard negative mining was implemented. The framework achieved a recall score of 84.61% and an F1 score of 78.54%. Improvements could be made to the training data or the CNN to further increase these scores.

Item: CGVC 2019: Frontmatter (Eurographics Association, 2019)
Authors: Vidal, Franck; Tam, Gary K. L.; Roberts, Jonathan C. Editors: Vidal, Franck P.; Tam, Gary K. L.; Roberts, Jonathan C.

Item: Evolutionary Interactive Analysis of MRI Gastric Images Using a Multiobjective Cooperative-coevolution Scheme (The Eurographics Association, 2018)
Authors: Al-Maliki, Shatha F.; Lutton, Évelyne; Boué, François; Vidal, Franck. Editors: Tam, Gary K. L.; Vidal, Franck.
In this study, we combine computer vision and visualisation/data exploration to analyse magnetic resonance imaging (MRI) data and detect garden peas inside the stomach. It is a preliminary objective of a larger project that aims to understand the kinetics of gastric emptying. We propose to perform the image analysis task as a multi-objective optimisation. A set of 7 equally important objectives is proposed to characterise peas. We rely on a cooperative co-evolution algorithm called the 'Fly Algorithm', implemented using NSGA-II. The Fly Algorithm is a specific case of the 'Parisian Approach', where the solution of an optimisation problem is represented as a set of individuals (e.g. the whole population) instead of a single individual (the best one) as in typical evolutionary algorithms (EAs). NSGA-II is a popular EA used to solve multi-objective optimisation problems. The output of the optimisation is a succession of datasets that progressively approximate the Pareto front, which needs to be understood and explored by the end-user. Using interactive Information Visualisation (InfoVis) and clustering techniques, peas are then semi-automatically segmented.
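At the heart of the NSGA-II selection mentioned in the abstract above is Pareto dominance over the objective scores of each 'fly'. The sketch below is purely illustrative (plain NumPy, not the authors' Fly Algorithm implementation); the maximisation convention and the random scores are assumptions.

```python
# Illustrative sketch only: extracting the non-dominated (Pareto) front from a
# population scored on several objectives, all assumed to be maximised.
import numpy as np

def dominates(a, b):
    """True if a is at least as good as b on every objective
    and strictly better on at least one (maximisation convention)."""
    return np.all(a >= b) and np.any(a > b)

def pareto_front(scores):
    """Return indices of non-dominated individuals.

    scores: (n_individuals, n_objectives) array, e.g. 7 pea-likeness
    criteria evaluated for every 'fly' in the population."""
    front = []
    for i, a in enumerate(scores):
        if not any(dominates(b, a) for j, b in enumerate(scores) if j != i):
            front.append(i)
    return front

# Toy usage: 5 flies scored on 7 hypothetical objectives in [0, 1].
rng = np.random.default_rng(0)
population_scores = rng.random((5, 7))
print(pareto_front(population_scores))
```

NSGA-II additionally sorts dominated individuals into successive fronts and uses crowding distance to preserve diversity; only the basic dominance test is shown here.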
Item: Computer Graphics and Visual Computing (CGVC) 2017: Frontmatter (Eurographics Association, 2017)
Authors: Wan, Tao Ruan; Vidal, Franck. Editors: Tao Ruan Wan; Franck Vidal.

Item: Simulating Dynamic Ecosystems with Co-Evolutionary Agents (The Eurographics Association, 2020)
Authors: Ferguson, Gary; Vidal, Franck. Editors: Ritsos, Panagiotis D.; Xu, Kai.
As video games grow in complexity and require increasingly large and immersive environments, there is a need for more believable and dynamic characters not controlled by the player, known as non-player characters (NPCs). Video game developers often face the challenge of designing these NPCs in a time-efficient manner. We propose an agent-based Cooperative Co-evolution Algorithm (CCEA) where NPCs are implemented as artificial life (AL) agents that are created through an evolutionary process based on simple rules. The virtual environment can be filled with a range of interesting agents, each acting independently of one another to fulfil their own wants and needs. The proposed middleware framework is suitable for computer animation of NPCs and the development of video games, especially where swarm intelligence is simulated. We proved that agents whose genome is made up of a very limited number of variables can be successfully integrated into a co-evolutionary multi-agent system (CoEMAS). Results showed promising levels of speciation and interesting, plausible emergent behaviours amongst the agents.

Item: Frontmatter: Computer Graphics and Visual Computing (CGVC) (The Eurographics Association, 2018)
Authors: Tam, Gary K. L.; Vidal, Franck. Editors: Tam, Gary K. L.; Vidal, Franck.

Item: Interactive Visualisation of the Food Content of a Human Stomach in MRI (The Eurographics Association, 2022)
Authors: Spann, Conor; Al-Maliki, Shatha; Boué, François; Lutton, Évelyne; Vidal, Franck. Editors: Peter Vangorp; Martin J. Turner.
Most medical imaging studies into human digestion focus on the organs themselves and neglect the content under digestion. Instead, analysing food inside digestive organs and any subsequent motion can provide valuable information about the digestive tract. This study is part of a larger project; previous work automatically detected peas in a human stomach from MRI scans but produced too many false positives. Our study therefore aims to accurately visualise peas in a human stomach whilst also providing facilities to correct the mistakes made by the previous pea detection. Our solution is a visualisation and correction tool split into 2D and 3D visualisation areas. The 2D areas show three sequential stomach slices with detected peas as green circles and allow the user to correct the pea detection. Peas can be added, removed or marked as unsure. The 3D area shows a Marching Cubes rendering of the stomach with spherical glyphs as the peas. Due to the way the data was acquired, some pea motion was also visualised. Aside from difficulties interpreting the data due to acquisition artefacts, our tool was found to be very easy to use, with some minor suggestions for improving interaction with the images. Overall, the software achieved its aims of visualising the peas and stomach whilst also providing methods to correct the pea data. Future work will look into improving the pea detection and into following the pea motion.
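For the 3D view described in the entry above (a Marching Cubes surface of the stomach with spherical glyphs for the detected peas), a minimal sketch using scikit-image and matplotlib is given below. The toolkit choice, the placeholder stomach_mask and the pea_centres array are assumptions; the paper does not state which libraries were used.

```python
# Illustrative sketch: Marching Cubes surface of a binary stomach mask with
# glyphs at detected pea positions. Library choice and all data are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure

# Assumed inputs: a (Z, Y, X) binary segmentation of the stomach and an
# (N, 3) array of detected pea centres in the same voxel coordinates.
stomach_mask = np.zeros((64, 64, 64))
stomach_mask[16:48, 16:48, 16:48] = 1.0               # placeholder "stomach"
pea_centres = np.array([[32, 30, 25], [35, 40, 33]])  # placeholder detections

# Extract the stomach surface with Marching Cubes.
verts, faces, normals, values = measure.marching_cubes(stomach_mask, level=0.5)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
surface = Poly3DCollection(verts[faces], alpha=0.2)   # translucent stomach wall
ax.add_collection3d(surface)

# Glyphs for the peas (scatter markers as a simple stand-in for spheres).
ax.scatter(pea_centres[:, 0], pea_centres[:, 1], pea_centres[:, 2],
           s=80, color="green", depthshade=True)

ax.set_xlim(0, 64); ax.set_ylim(0, 64); ax.set_zlim(0, 64)
plt.show()
```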
Item: gVirtualXRay: Virtual X-Ray Imaging Library on GPU (The Eurographics Association, 2017)
Authors: Sujar, Aaron; Meuleman, Andreas; Villard, Pierre-Frederic; García, Marcos; Vidal, Franck. Editors: Tao Ruan Wan; Franck Vidal.
We present an open-source library called gVirtualXRay to simulate realistic X-ray images in real time. It implements the attenuation law (also called the Beer-Lambert law) on the GPU. It takes into account the polychromatism of the beam spectra as well as the finite size of X-ray tubes. The library is written in C++ using modern OpenGL. It is fully portable and works on most common desktop/laptop computers. It has been tested on MS Windows, Linux, and Mac OS X. It supports a wide range of windowing solutions, such as FLTK, GLUT, GLFW3, Qt4, and Qt5. The library also offers realistic visual rendering of anatomical structures, including bones, liver, diaphragm and lungs. The accuracy of the X-ray images produced by gVirtualXRay's implementation has been validated using Geant4, a well-established, state-of-the-art Monte Carlo simulation toolkit developed by CERN. gVirtualXRay can be used in a wide range of applications where fast and accurate X-ray simulations from polygon meshes are needed, e.g. medical simulators for training purposes, simulation of tomography data acquisition with patient motion to include artefacts in reconstructed CT images, and deformable registration. Our application example package includes real-time respiration and X-ray simulation, CT acquisition and reconstruction, and iso-surfacing of implicit functions using Marching Cubes.
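The attenuation model that gVirtualXRay evaluates on the GPU is the Beer-Lambert law, extended to a polychromatic spectrum. The plain-NumPy sketch below only illustrates that formula for a single detector pixel; the spectrum, attenuation coefficients and path lengths are made-up numbers, not values from the library.

```python
# Illustrative sketch of the polychromatic Beer-Lambert law for one ray/pixel.
# This is not gVirtualXRay's GPU code; all numerical values are made up.
import numpy as np

# Assumed incident spectrum: photon counts N(E) at a few energies E (keV).
energies_keV = np.array([40.0, 60.0, 80.0])
photon_counts = np.array([1.0e5, 2.0e5, 1.5e5])

# Assumed linear attenuation coefficients mu_i(E) (1/cm) for two materials
# crossed by the ray (rows: materials, columns: energies) and path lengths (cm).
mu = np.array([[0.27, 0.21, 0.18],    # e.g. soft tissue (made-up values)
               [0.60, 0.40, 0.30]])   # e.g. bone        (made-up values)
path_lengths_cm = np.array([10.0, 2.0])

# Beer-Lambert: I(E) = N(E) * E * exp(-sum_i mu_i(E) * d_i),
# summed over the spectrum to give the energy reaching the detector pixel.
transmission = np.exp(-(mu * path_lengths_cm[:, None]).sum(axis=0))
detected_energy = np.sum(photon_counts * energies_keV * transmission)
print(detected_energy)
```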