Search Results
Now showing 1 - 7 of 7
Item: Recognising Specific Foods in MRI Scans Using CNN and Visualisation (The Eurographics Association, 2020)
Authors: Gardner, Joshua; Al-Maliki, Shatha; Lutton, Évelyne; Boué, François; Vidal, Franck
Editors: Ritsos, Panagiotis D. and Xu, Kai
This work is part of an experimental project aiming at understanding the kinetics of human gastric emptying. For this purpose, magnetic resonance imaging (MRI) images of the stomach of healthy volunteers have been acquired using a state-of-the-art scanner with an adapted protocol. The challenge is to follow the stomach content (food) in the data. Frozen garden peas and petits pois have been chosen as an experimental proof-of-concept as their shapes are well defined and are not altered in the early stages of digestion. The food recognition is performed as a binary classification implemented using a deep convolutional neural network (CNN). Input hyperparameters, here image size and number of epochs, were exhaustively evaluated to identify the combination of parameters that produces the best classification. The results have been analysed using interactive visualisation. We prove in this paper that advances in computer vision and machine learning can be deployed to automatically label the content of the stomach even when the amount of training data is low and the data is imbalanced. Interactive visualisation helps identify the most effective combinations of hyperparameters to maximise accuracy, precision, recall and F1 score, leaving the end-user to evaluate the possible trade-off between these metrics. Food recognition in MRI scans through the neural network produced an accuracy of 0.97, precision of 0.91, recall of 0.86 and F1 score of 0.89, all close to 1.

Item: Projectional Radiography Simulator: an Interactive Teaching Tool (The Eurographics Association, 2019)
Authors: Sujar, Aaron; Kelly, Graham; García, Marcos; Vidal, Franck
Editors: Vidal, Franck P. and Tam, Gary K. L. and Roberts, Jonathan C.
Radiographers need a broad range of knowledge about X-ray radiography, which can be specific to each part of the body. Due to the harmfulness of the ionising radiation used, teaching and training using real patients is not ethical. Students have limited access to real X-ray rooms and anatomic phantoms during their studies. Books, and now web apps, containing a set of static pictures are therefore often used to illustrate clinical cases. In this study, we have built an Interactive X-ray Projectional Simulator using a deformation algorithm with a real-time X-ray image simulator. Users can load various anatomic models, and the tool enables virtual model positioning in order to set a specific position and see the corresponding X-ray image. It allows teachers to simulate any particular X-ray projection in a lecturing environment without using real patients and avoiding any kind of radiation risk. This tool also allows the students to reproduce the important parameters of a real X-ray machine in a safe environment. We have performed a face and content validation in which our tool proves to be realistic (72% of the participants agreed that the simulations are visually realistic), useful (67%) and suitable (78%) for teaching X-ray radiography.
Item: Where's Wally? A Machine Learning Approach (The Eurographics Association, 2021)
Authors: Barthelmes, Tobias; Vidal, Franck
Editors: Xu, Kai and Turner, Martin
Object detection has been implemented in all sorts of real-life scenarios such as facial recognition, traffic monitoring and medical imaging, but the research that has gone into object detection in drawings and cartoons is not nearly as extensive. The Where's Wally puzzle books give a good opportunity to apply some of these real-life methods to the fictional world. The Wally detection framework proposed is composed of two stages: i) a Haar-cascade classifier based on the Viola-Jones framework, which detects possible candidates from a scene in the Where's Wally books, and ii) a lightweight convolutional neural network (CNN) that re-labels the objects detected by the cascade classifier. The cascade classifier was trained on 85 positive images and 172 negative images. It was then applied to 12 test images, which produced over 400 false positives. To increase the accuracy of the models, hard negative mining was implemented. The framework achieved a recall score of 84.61% and an F1 score of 78.54%. Improvements could be made to the training data or the CNN to further increase these scores.

Item: CGVC 2019: Frontmatter (Eurographics Association, 2019)
Authors: Vidal, Franck; Tam, Gary K. L.; Roberts, Jonathan C.
Editors: Vidal, Franck P. and Tam, Gary K. L. and Roberts, Jonathan C.

Item: Simulating Dynamic Ecosystems with Co-Evolutionary Agents (The Eurographics Association, 2020)
Authors: Ferguson, Gary; Vidal, Franck
Editors: Ritsos, Panagiotis D. and Xu, Kai
As video games grow in complexity and require increasingly large and immersive environments, there is a need for more believable and dynamic characters not controlled by the player, known as non-player characters (NPCs). Video game developers often face the challenge of designing these NPCs in a time-efficient manner. We propose an agent-based Cooperative Co-evolution Algorithm (CCEA) where NPCs are implemented as artificial life (AL) agents that are created through an evolutionary process based on simple rules. The virtual environment can be filled with a range of interesting agents, each acting independently from one another, to fulfil their own wants and needs. The proposed middleware framework is suitable for computer animation of NPCs and the development of video games, especially where swarm intelligence is simulated. We proved that agents implemented with a very limited number of variables making up their genome can be successfully integrated in a co-evolutionary multi-agent system (CoEMAS). Results showed promising levels of speciation and interesting emergent and plausible behaviours amongst the agents.
Item: Interactive Visualisation of the Food Content of a Human Stomach in MRI (The Eurographics Association, 2022)
Authors: Spann, Conor; Al-Maliki, Shatha; Boué, François; Lutton, Évelyne; Vidal, Franck
Editors: Peter Vangorp; Martin J. Turner
Most medical imaging studies into human digestion focus on the organs themselves and neglect the content under digestion. Instead, analysing food inside digestive organs and any subsequent motion can provide valuable information about the digestive tract. This study is part of a larger project; previous work automatically detected peas in a human stomach from MRI scans, but it produced too many false positives. Our study therefore aims to accurately visualise peas in a human stomach whilst also providing facilities to correct the mistakes made by the previous pea detection. Our solution is a visualisation and correction tool split into 2D and 3D visualisation areas. The 2D areas show three sequential stomach slices with detected peas as green circles and allow the user to correct the pea detection. Peas can be added, removed or marked as unsure. The 3D area shows a Marching Cubes rendering of the stomach with spherical glyphs as the peas. Due to the way the data was acquired, some pea motion was also visualised. Aside from difficulties interpreting the data due to acquisition artefacts, our tool was found to be very easy to use, with some minor improvement suggestions for interacting with the images. Overall, the software achieved its aims of visualising the peas and stomach whilst also providing methods to correct the pea data. Future work will look into improving the pea detection and into following the pea motion.

Item: Registration of 3D Triangular Models to 2D X-ray Projections Using Black-box Optimisation and X-ray Simulation (The Eurographics Association, 2019)
Authors: Wen, Tianci; Mihail, Radu; Al-Maliki, Shatha; Letang, Jean; Vidal, Franck
Editors: Vidal, Franck P. and Tam, Gary K. L. and Roberts, Jonathan C.
Registration has been studied extensively for the past few decades. In this paper we propose to solve the registration of 3D triangular models onto 2D X-ray projections. Our approach relies extensively on global optimisation methods and fast X-ray simulation on the GPU. To evaluate our pipeline, each optimisation is repeated 15 times to gather statistically meaningful results, in particular to assess the reproducibility of the outputs. We demonstrate the validity of our approach on two registration problems: i) 3D kinematic configuration of a 3D hand model, i.e. the recovery of the original hand pose from a postero-anterior (PA) view radiograph. The performance is measured by Mean Absolute Error (MAE). ii) Automatic estimation of the position and rigid transformation of geometric shapes (cube and cylinders) to match an actual metallic sample made of Ti/SiC fibre composite with tungsten (W) cores. In this case the performance is measured in terms of F-score (86%), accuracy (95%), precision (75%), recall (100%), and true negative rate (94%). Our registration framework is successful for both test cases when using a suitable optimisation algorithm.
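
The CNN-based food recognition described in the first result above is a binary patch classifier whose two input hyperparameters (image size and number of epochs) are evaluated exhaustively. The following is a minimal sketch of such a sweep, assuming TensorFlow/Keras and scikit-learn; the network architecture, hyperparameter grids and data layout are illustrative and not taken from the paper:

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import precision_score, recall_score, f1_score

def build_cnn(image_size: int) -> tf.keras.Model:
    """Small binary CNN (pea vs. background) for greyscale MRI patches."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(image_size, image_size, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def grid_search(x_train, y_train, x_val, y_val,
                sizes=(32, 64, 96), epoch_counts=(10, 20, 50)):
    """Exhaustively evaluate (image size, epochs) pairs and report the metrics
    quoted in the abstract: accuracy, precision, recall and F1 score."""
    results = []
    for size in sizes:
        xs_tr = tf.image.resize(x_train, (size, size)).numpy()
        xs_va = tf.image.resize(x_val, (size, size)).numpy()
        for epochs in epoch_counts:
            model = build_cnn(size)
            model.fit(xs_tr, y_train, epochs=epochs, verbose=0)
            y_pred = (model.predict(xs_va, verbose=0) > 0.5).astype(int).ravel()
            results.append({
                "image_size": size,
                "epochs": epochs,
                "accuracy": float(np.mean(y_pred == y_val)),
                "precision": precision_score(y_val, y_pred),
                "recall": recall_score(y_val, y_pred),
                "f1": f1_score(y_val, y_pred),
            })
    return results
```

The resulting table of metrics per (image size, epochs) pair is exactly the kind of data the paper then explores with interactive visualisation to expose the trade-offs between the four scores.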
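The projectional radiography simulator in the second result relies on real-time X-ray image simulation. As a rough illustration only, the sketch below shows the Beer-Lambert image formation that such simulators are built on, assuming a voxelised volume of attenuation coefficients and an idealised parallel beam; the paper's GPU simulator handles cone-beam geometry, deformable models and interactivity, none of which is shown here:

```python
import numpy as np

def simulate_parallel_xray(mu_volume: np.ndarray, voxel_size_mm: float,
                           i0: float = 1.0) -> np.ndarray:
    """Beer-Lambert law with a parallel beam along the x axis.

    mu_volume: 3D array of linear attenuation coefficients (mm^-1), indexed (z, y, x).
    Returns a 2D image: I = I0 * exp(-integral of mu along each ray).
    """
    path_integral = mu_volume.sum(axis=2) * voxel_size_mm  # line integral per ray
    return i0 * np.exp(-path_integral)
```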
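The Where's Wally result describes a two-stage pipeline: a Haar cascade proposes candidate windows and a lightweight CNN re-labels them to reject false positives. A minimal sketch using OpenCV and Keras follows; the file names wally_cascade.xml and wally_cnn.h5 are hypothetical stand-ins for the trained stage-one and stage-two models, and the patch size and threshold are assumptions:

```python
import cv2
import numpy as np
import tensorflow as tf

# Stage 1: Viola-Jones style cascade trained offline (e.g. with opencv_traincascade).
cascade = cv2.CascadeClassifier("wally_cascade.xml")
# Stage 2: small binary CNN (Wally / not Wally) trained on the cascade's detections.
cnn = tf.keras.models.load_model("wally_cnn.h5")

def detect_wally(image_bgr: np.ndarray, patch_size: int = 64, threshold: float = 0.5):
    """Return candidate boxes accepted by both stages as (x, y, w, h, score)."""
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    candidates = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=3)
    accepted = []
    for (x, y, w, h) in candidates:
        patch = cv2.resize(image_bgr[y:y + h, x:x + w], (patch_size, patch_size))
        score = float(cnn.predict(patch[np.newaxis] / 255.0, verbose=0)[0, 0])
        if score >= threshold:  # stage 2 filters the cascade's false positives
            accepted.append((x, y, w, h, score))
    return accepted
```

Hard negative mining, as mentioned in the abstract, would feed the stage-two rejections that are actually background back into the training set as additional negative examples.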
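The co-evolutionary agents result proposes a cooperative co-evolution algorithm (CCEA) in which each NPC carries a very small genome. The sketch below illustrates the generic CCEA loop only: each species evolves separately and individuals are scored together with representatives of the other species. The genome layout, mutation operator and joint fitness are placeholders, not the authors' design:

```python
import random

GENOME_LENGTH = 4  # "very limited number of variables" per agent (illustrative)

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LENGTH)]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0.0, 0.05) if random.random() < rate else g
            for g in genome]

def joint_fitness(team):
    # Placeholder: reward teams whose genome means are spread out (a crude
    # proxy for speciation / niche coverage).
    means = sorted(sum(g) / len(g) for g in team)
    return sum(b - a for a, b in zip(means, means[1:]))

def ccea(n_species=3, pop_size=20, generations=100):
    populations = [[random_genome() for _ in range(pop_size)] for _ in range(n_species)]
    representatives = [pop[0] for pop in populations]
    for _ in range(generations):
        for s, pop in enumerate(populations):
            scored = []
            for individual in pop:
                team = list(representatives)
                team[s] = individual            # evaluate with the other species' reps
                scored.append((joint_fitness(team), individual))
            scored.sort(key=lambda t: t[0], reverse=True)
            survivors = [ind for _, ind in scored[: pop_size // 2]]
            populations[s] = survivors + [mutate(random.choice(survivors))
                                          for _ in range(pop_size - len(survivors))]
            representatives[s] = scored[0][1]   # best individual represents the species
    return representatives
```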
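The stomach visualisation result combines a Marching Cubes surface of the stomach with one spherical glyph per detected pea. A minimal sketch of that geometry stage, assuming scikit-image, a binary stomach segmentation and a list of pea centres in voxel coordinates (the tool's actual rendering stack is not specified here), is:

```python
import numpy as np
from skimage import measure

def stomach_mesh(stomach_mask: np.ndarray, level: float = 0.5):
    """Extract a triangle mesh (vertices, faces) from a binary stomach segmentation
    using Marching Cubes."""
    verts, faces, _normals, _values = measure.marching_cubes(
        stomach_mask.astype(float), level=level)
    return verts, faces

def pea_glyphs(pea_centres, radius_vox: float = 2.0, n: int = 12):
    """Return one small sphere (as a point cloud) per pea centre, to be rendered
    as a glyph alongside the stomach surface."""
    theta = np.linspace(0.0, np.pi, n)
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    t, p = np.meshgrid(theta, phi)
    unit = np.stack([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)], axis=-1)
    return [np.asarray(centre) + radius_vox * unit.reshape(-1, 3)
            for centre in pea_centres]
```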
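The registration result optimises pose parameters so that a simulated X-ray projection of the 3D model matches the real radiograph, repeating each optimisation 15 times to assess reproducibility. A minimal sketch, assuming a black-box simulate_xray(pose) renderer standing in for the paper's fast GPU X-ray simulator, a mean-squared-error image dissimilarity, and SciPy's differential evolution as one possible global optimiser (the paper evaluates several black-box algorithms), is:

```python
import numpy as np
from scipy.optimize import differential_evolution

def register(real_radiograph: np.ndarray, simulate_xray, bounds, n_runs: int = 15):
    """bounds: one (min, max) pair per pose parameter, e.g. rotations and translations
    of the 3D triangular model. Repeats the optimisation n_runs times, as in the paper,
    and returns the best pose plus the objective value of every run."""
    def objective(pose):
        simulated = simulate_xray(pose)  # render the model at this pose (black box)
        return float(np.mean((simulated - real_radiograph) ** 2))

    runs = [differential_evolution(objective, bounds, maxiter=100, seed=run)
            for run in range(n_runs)]
    best = min(runs, key=lambda r: r.fun)
    return best.x, [r.fun for r in runs]
```

The spread of the per-run objective values is what supports the reproducibility claim; a tight spread across the 15 runs indicates the optimiser converges to consistent poses.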