E-commerce turnover grows at a constant rate of about 10% per year. Together with increasing complexity and traffic spikes, this underlines the need for a scalable software architecture to prevent potential technical debt, higher financial cost, longer maintenance, or reduced reliability. Because existing approaches such as the Palladio Approach require a high modelling overhead, and reducing this overhead was identified as important, this master's thesis focuses on the modelling and simulation of e-commerce web application architectures using a high-level approach that provides a faster, though possibly less accurate, prediction of scalability.
The Design Science Research Process serves as the overall frame, a scientific literature review draws on the existing knowledge base, and the Conical Methodology guides the artefact creation. The artefact is a graphical model which is evaluated using a simulation developed in Python with the SimPy framework. For model creation and evaluation, a total of twelve papers investigating the scalability of e-commerce web application architectures is split into a test and a training group. The training group and parts of the scientific research are used to identify the components load balancer, application server, web tier, ERP system, legacy system and database, as well as some general characteristics that need to be considered. The components with the most modelling variables are the application server and the web tier with a total of thirteen, while the ERP and legacy system required only five.
The model is evaluated using three papers from the test group, where an average throughput error of 5.78% and response time errors of 46.55% and 26.46% were identified. An additional evaluation based on two non-e-commerce architectures shows that the model is also usable for other types of architectures. Even though the average error gives the impression that the model does not provide a good estimation, the graphical results show that the model and its simulation can be used to provide a faster scalability prediction. The model is least accurate at the point where the response time increases exponentially, because there the variables that account for only a small percentage of the behaviour, and were therefore ignored in the model, have the highest influence.
Future research could extend the model by adding or investigating additional components, adding features ignored within this work, or applying it to other types of web application architectures. Additionally, the low-level and the high-level approaches could be brought together to combine the advantages of both.
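The high-level idea of simulating such an architecture can be sketched even without SimPy. The following stand-alone Python sketch models a load balancer dispatching requests to identical application servers with exponentially distributed arrival and service times; all rates and the dispatch policy are illustrative assumptions, not the thesis's actual model:

```python
import random

def simulate(arrival_rate, service_rate, n_servers, n_requests, seed=1):
    """Toy discrete-event model: requests arrive at a load balancer and
    are dispatched to the application server that frees up earliest."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers        # time at which each server becomes idle
    t = 0.0                            # current arrival time
    response_times = []
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)            # next arrival
        s = min(range(n_servers), key=free_at.__getitem__)
        start = max(t, free_at[s])                    # wait if server is busy
        free_at[s] = start + rng.expovariate(service_rate)
        response_times.append(free_at[s] - t)         # queueing + service
    throughput = n_requests / max(free_at)
    return throughput, sum(response_times) / n_requests
```

With four servers of rate 2 req/s and arrivals at 5 req/s the system is stable, so simulated throughput tracks the arrival rate while the mean response time reflects queueing on top of the 0.5 s mean service time.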
Development and validation of a neural network for adaptive gait cycle detection from kinematic data
(2020)
(1) Background: Instrumented gait analysis is a tool for quantification of the different
aspects of the locomotor system. Gait analysis technology has substantially evolved over
the last decade and most modern systems provide real-time capability. The ability to
calculate joint angles with low delays paves the way for new applications such as real-time
movement feedback, like control of functional electrical stimulation in the rehabilitation
of individuals with gait disorders. For any kind of therapeutic application, the timely
determination of different gait phases such as stance or swing is crucial. Gait phases are
usually estimated based on heuristics of joint angles or time points of certain gait events.
Such heuristic approaches often do not work properly in people with gait disorders due to the greater variability of their pathological gait patterns. To improve on the current state of the art, this thesis introduces a data-driven approach for real-time determination of gait phases from kinematic variables based on long short-term memory recurrent neural networks (LSTM RNNs).
(2) Methods: In this thesis, 56 measurements with gait data of 11 healthy subjects,
13 individuals with incomplete spinal cord injury and 10 stroke survivors with walking
speeds ranging from 0.2 m/s up to 1 m/s were used to train the networks. Each measurement
contained kinematic data from the corresponding subject walking on a treadmill for 90
seconds. Kinematic data was obtained by measuring the positions of reflective markers on
body landmarks (Helen Hayes marker set) with a sample rate of 60 Hz. For constructing a
ground truth, gait data was annotated manually by three raters. Two approaches, direct
regression of gait phases and estimation via detection of the gait events Initial Contact
and Final Contact were implemented for evaluation of the performance of LSTM RNNs.
For comparison of performance, the frequently cited coordinate- and velocity-based event
detection approaches of Zeni et al. were used. All aspects of this thesis have been
implemented within MATLAB Version 9.6 using the Deep Learning Toolbox.
(3) Results: The mean time difference between events annotated by the three raters was −0.07 ± 20.17 ms. Correlation coefficients of inter-rater and intra-rater reliability yielded mainly excellent or perfect results. For detection of gait events, the LSTM RNN algorithm covered 97.05% of all events within a scope of 50 ms. The overall mean time difference between detected events and ground truth was −11.62 ± 7.01 ms. Temporal differences and deviations were consistently small over different walking speeds and gait pathologies. Mean time difference to the ground truth was 13.61 ± 17.88 ms for the coordinate-based approach of Zeni et al. and 17.18 ± 15.67 ms for the velocity-based approach. For estimation of gait phases, the gait phase was determined as a percentage. Mean squared error to the ground truth was 0.95 ± 0.55% for the proposed algorithm using event detection and 1.50 ± 0.55% for regression. For the approaches of Zeni et al., mean squared error was 2.04 ± 1.23% for the coordinate-based approach and 2.24 ± 1.34% for the velocity-based approach. Regarding mean absolute error to the ground truth, the proposed algorithm achieved a mean absolute error of 1.95 ± 1.10% using event detection and one of 7.25 ± 1.45% using regression. Mean absolute error for the coordinate-based approach of Zeni et al. was 4.08 ± 2.51% and 4.50 ± 2.73% for the velocity-based approach.
(4) Conclusion: The newly introduced LSTM RNN algorithm offers a high recognition rate of gait events with a small delay. It outperforms several state-of-the-art gait event detection methods while offering real-time processing and high generalization across trained gait patterns. Additionally, the proposed algorithm is easy to integrate into existing applications and contains parameters that self-adapt to individuals’ gait behavior to further improve performance. With respect to gait phase estimation, the performance of the proposed algorithm using event detection is in line with current wearable state-of-the-art methods. Compared with conventional methods, the performance of direct regression of gait phases is only moderate. Given the results, LSTM RNNs are feasible for event detection and applicable to many clinical and research applications, but they may not be suitable for the estimation of gait phases via regression. It can be assumed that a more optimal network configuration would yield considerably higher performance.
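For comparison, the coordinate-based event detection of Zeni et al. essentially reduces to peak-picking on the anterior heel position relative to a pelvis marker. A toy Python sketch follows; the marker inputs and the simple three-point peak rule are illustrative simplifications, not the exact published method:

```python
def detect_initial_contacts(heel_x, pelvis_x):
    """Flag initial contact at local maxima of the heel marker's anterior
    position relative to the pelvis (simplified Zeni-style rule)."""
    rel = [h - p for h, p in zip(heel_x, pelvis_x)]
    events = []
    for i in range(1, len(rel) - 1):
        if rel[i - 1] < rel[i] >= rel[i + 1]:   # three-point local maximum
            events.append(i)                    # frame index of the event
    return events
```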
Implementation of an interactive pattern mining framework on electronic health record datasets
(2019)
Large collections of electronic patient records contain a broad range of clinical information highly relevant for data analysis. However, they are maintained primarily for patient administration, and automated methods are required to extract valuable knowledge for predictive, preventive, personalized and participatory medicine. Sequential pattern mining is a fundamental task in data mining which can be used to find statistically relevant, non-trivial temporal dependencies of events, such as disease comorbidities. This work's objective is to use this mining technique to identify disease associations based on ICD-9-CM code data of the entire Taiwanese population, obtained from Taiwan’s National Health Insurance Research Database.
This thesis reports the development and implementation of the Disease Pattern Miner – a pattern mining framework for the medical domain. The framework was designed as a web application which can run several state-of-the-art sequence mining algorithms on electronic health records, collect and filter the results to reduce the number of patterns to a meaningful size, and visualize the disease associations as an interactive model for a specific population group. This may be crucial to discover new disease associations and offer novel insights into disease pathogenesis. A structured evaluation of the data and models is required before medical data scientists may use this application as a tool for further research to get a better understanding of disease comorbidities.
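Sequential pattern mining over diagnosis timelines can be illustrated with a toy support counter for ordered code pairs; this is a drastic simplification of the state-of-the-art algorithms the framework wraps, and the ICD-9 codes below are placeholders:

```python
from collections import Counter
from itertools import combinations

def frequent_ordered_pairs(patient_sequences, min_support=2):
    """Count ordered code pairs (a diagnosed before b) once per patient
    timeline and keep the pairs meeting the minimum support."""
    support = Counter()
    for seq in patient_sequences:
        pairs = set()                      # count each pair once per patient
        for i, j in combinations(range(len(seq)), 2):
            if seq[i] != seq[j]:
                pairs.add((seq[i], seq[j]))
        support.update(pairs)
    return {p: c for p, c in support.items() if c >= min_support}
```

Real sequential pattern miners (e.g. PrefixSpan-style algorithms) extend this idea to patterns of arbitrary length with efficient pruning.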
In this bachelor thesis, different models for predicting the influenza virus are
examined in more detail.
The focus is on epidemiological compartmental models, as well as on different
Machine Learning approaches.
In particular, the basics chapter presents the SIR model and its various extensions.
Furthermore, Deep Learning and Social Network approaches are
investigated and the applied methods of a selected article are analysed in more
detail.
The practical part of this work consists of the implementation of a Multiple Linear Regression model and an Artificial Neural Network. Both models were developed in the programming language Python using the Deep Learning framework Keras.
Tests were performed with real data from the Réseau Sentinelles, a French
organisation for monitoring national health.
The results of the tests show that the Neural Network is able to make better
predictions than the Multiple Linear Regression model.
The discussion shows ideas for improving influenza prediction including the
establishment of a worldwide collaboration between the surveillance centres as
well as the consolidation of historical data with real-time social media data.
This work thus comprises a state-of-the-art review of models for the spread of the influenza virus, and the development and comparison of several models programmed in Python and evaluated on real data.
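A multiple linear regression baseline on lagged incidence values, of the kind compared against the neural network, can be sketched in pure Python via the normal equations; the lag structure and data below are illustrative, not the thesis's setup:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y,
    solved with Gaussian elimination and partial pivoting."""
    n, d = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(d)] for i in range(d)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(d)]
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * d
    for i in reversed(range(d)):                     # back-substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, d))) / A[i][i]
    return w

def lagged_design(series, lags):
    """Rows [1, y_{t-1}, ..., y_{t-lags}] predicting y_t, as in an
    autoregressive incidence model."""
    X = [[1.0] + [series[t - k] for k in range(1, lags + 1)]
         for t in range(lags, len(series))]
    return X, series[lags:]
```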
Alzheimer’s Disease affects millions of people worldwide, but to this day the gold standard for a definitive diagnosis of this disease is a biopsy. Nevertheless, as the disease progresses, a volume loss in the Hippocampus can be observed. Therefore, good segmentation methods are crucial to facilitate quantification of this loss.
The focus of this work is on the development of a Machine Learning algorithm, more
precisely a Generative Adversarial Network, for the automated segmentation of the
human Hippocampus and its substructures in Magnetic Resonance Images. In particular,
the task is to determine if the integration of a pre-trained network that generates
segmentations into a Generative Adversarial Network scheme can improve generated
segmentations. In this context, a segmentation network in form of a U-net corresponds
to the generator. The discriminator is developed separately and merged in a second
step with the generator for combined training.
With a literature review regarding the automated segmentation of the Hippocampus,
current methods in this field and their medical and technological basics were identified.
The datasets were preprocessed to make them suitable for the use in a neural
network. In the training process, the generator was trained first until convergence.
Then, the Generative Adversarial Network including the pre-trained generator was
trained. The outcomes were evaluated via cross-validation in two different datasets
(Kulaga-Yoskovitz and Winterburn). The Generative Adversarial Network scheme
was tested regarding different architectural and training aspects, including the usage
of skip-connections and a combined loss function.
The best results were achieved on the Kulaga-Yoskovitz dataset with a Dice coefficient of 90.84 % after the combined training of generator and discriminator with a joint loss function. This improves on the current state-of-the-art method for the same task and dataset, which achieved a Dice index of 88.79 % (Romero [Rom17]). Except for two cases in the Winterburn dataset, the proposed combined method always improved the Dice results over training the generator alone, even if only by a small amount.
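The Dice coefficient used in this evaluation can be computed as follows, with binary masks flattened to 0/1 sequences:

```python
def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(seg_a, seg_b))
    size = sum(seg_a) + sum(seg_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks count as identical
```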
Medical imaging produces many images every day in clinical routine. Keeping up with the
daily image analysis task and this vast amount of data is quite a challenge for radiologists.
However, these analysis tasks can be automated with well-proven automatic segmentation
methods. Review by an expert is necessary because learning-based automatic segmentation methods may not perform well on exceptional image data. Creating valid segmentations by reviewing them also improves the learning-based methods.
Combining established standards with modern technologies creates a flexible environment
to efficiently evaluate multiple segmentation algorithm outputs based on different metrics
and visualizations and report these analysis results back to a clinical system environment.
The presented software system can inspect such quantitative results in a fast and intuitive
way, potentially improving the daily repetitive segmentation review and rework of a
research radiologist. The presented system is designed to be integrated into a virtual
distributed computing environment with other systems and analysis methods. Critical
factors for this particular environment are the handling of many patient data and routine
automated analysis with state-of-the-art technology.
First experiments show that the time to review automatic segmentation results can be
roughly divided in half while the confidence of the radiologist is enhanced. The system
is also able to highlight individual slices which are essential for the expert’s review
decision. For this highlighting, different metric scores are compared and evaluated.
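Highlighting the slices most in need of review could, for instance, rank slices by a per-slice metric. The following sketch uses per-slice Dice against a reference as a hypothetical criterion; the thesis compares several metric scores, not necessarily this one:

```python
def worst_slices(auto_seg, ref_seg, k=2):
    """Rank slices of a 3-D segmentation (lists of 2-D binary masks) by
    per-slice Dice score and return the k lowest-scoring slice indices,
    i.e. the slices a reviewer should inspect first."""
    def dice(a, b):
        inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
        size = sum(map(sum, a)) + sum(map(sum, b))
        return 2.0 * inter / size if size else 1.0
    scores = [dice(sa, sr) for sa, sr in zip(auto_seg, ref_seg)]
    return sorted(range(len(scores)), key=lambda i: scores[i])[:k]
```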
Spinodal Zr0.4Hf0.6Ni1.15Sn half-Heusler thermoelectrics are synthesized and aged. The complex microstructure due to the spinodal decomposition is investigated by powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Rietveld refinements confirm that excess Ni atoms in the arc-melted and spark-plasma-sintered half-Heusler matrix prefer to form nanoscale/submicron Heusler precipitates via spinodal decomposition and growth during aging at 1173 K. Such phase separation changes the band gap of the semiconductors, reduces the thermal conductivity drastically and improves the thermoelectric performance. As a result, an improvement of the ZT value of more than 50 % over the unaged specimens was achieved.
Initial results of ongoing research in the field of reactive mobile autonomy are presented. The aim is to create a reactive obstacle avoidance method for a mobile agent operating in dynamic, unstructured, and unpredictable environments. The method is inspired by the stimulus-response behavior of simple animals. An obstacle avoidance controller is developed that uses raw visual information about the environment. It employs reinforcement learning and is therefore capable of self-development. This should result in obstacle avoidance behavior that is adaptable and therefore generalizes across various operational modalities. The general assumptions about the agent's capabilities, the features of the environment, and the initial results of the simulation are presented. Plans for improvement and a suitable performance evaluation are suggested.
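The stimulus-response idea can be sketched as tabular Q-learning over a coarse visual state. Everything here (states, actions, reward) is an invented toy, not the thesis's controller:

```python
import random

def train_avoider(episodes=2000, alpha=0.5, eps=0.1, seed=0):
    """Tabular Q-learning for a one-step stimulus-response task: the state
    is a coarse reading of where an obstacle appears in the visual field,
    and the action steers the agent."""
    rng = random.Random(seed)
    states = ["left", "center", "right", "clear"]
    actions = ["steer_left", "forward", "steer_right"]
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        if rng.random() < eps:                       # epsilon-greedy exploration
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(s, act)])
        collided = (s == "left" and a == "steer_left") or \
                   (s == "center" and a == "forward") or \
                   (s == "right" and a == "steer_right")
        r = -1.0 if collided else (1.0 if s != "clear" or a == "forward" else 0.0)
        q[(s, a)] += alpha * (r - q[(s, a)])         # one-step episodes, no bootstrap
    # greedy policy after training
    return {s: max(actions, key=lambda a: q[(s, a)]) for s in states}
```

After training, the greedy policy steers away from wherever the obstacle appears and drives forward when the field is clear.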
A considerable amount of research in the field of modern robotics deals with mobile agents and their autonomous operation in unstructured, dynamic, and unpredictable environments. Designing robust controllers that map sensory input to action in order to avoid obstacles remains a challenging task. Several biological concepts are amenable to autonomous navigation and reactive obstacle avoidance.
We present an overview of the most noteworthy, elaborated, and interesting biologically inspired approaches for solving the obstacle avoidance problem. We categorize these approaches into three groups: nature-inspired optimization, reinforcement learning, and biorobotics. We emphasize the advantages and highlight potential drawbacks of each approach. We also identify the benefits of using biological principles in artificial intelligence across various research areas.
eHMIS is a Ugandan Hospital Information System (HIS), which targets the Sub-Saharan market. In its first version all forms were programmed statically and adaptations were done by code modifications. In 2014 the development of a second version of eHMIS based on Java started.
This work aims at introducing dynamic forms to this new version. While forms that are significantly important to the workflow of the application will remain static, others are replaced by forms that are dynamically designed by the user. By that, the application will become more flexible and local and situational tailoring will be possible without inducing extra costs.
In this thesis the design, implementation and testing of dynamic forms in eHMIS is discussed. The architecture is based on the questionnaire resource of FHIR®. The module enables the user to create questions and group them into sections and questionnaires. For each question the type of answer expected and other constraints can be defined. A user interface covering all functions was designed, so that no programming skills are required. In a first step dynamic forms were integrated in the application's workflow for recording symptoms, though other fields of application are possible. For testing, a usability experiment was conducted in Tororo Hospital in Eastern Uganda, using the thinking aloud method. Results were analysed and evaluated to detect usability problems and gain a general impression of user satisfaction.
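A dynamic form based on the FHIR Questionnaire resource can be represented as a small JSON document. This sketch builds a minimal resource covering only a tiny subset of the spec's fields, and is not eHMIS's actual data model:

```python
def make_questionnaire(title, questions):
    """Assemble a minimal FHIR Questionnaire resource from (text, type)
    pairs; field names follow the FHIR specification."""
    return {
        "resourceType": "Questionnaire",
        "status": "active",
        "title": title,
        "item": [
            {"linkId": str(i + 1), "text": text, "type": qtype}
            for i, (text, qtype) in enumerate(questions)
        ],
    }
```

A real implementation would also carry answer constraints (e.g. `answerOption`, `required`) and group items into sections via nested items of type `group`.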
Every year, hundreds of thousands of patients are affected by treatment failure or adverse drug reactions, many of which could be prevented by pharmacogenomic testing. To address these deficiencies in care, clinics require automated clinical decision support through computer-based systems, which provide clinicians with patient-specific recommendations. The primary knowledge needed for clinical pharmacogenomics is currently being developed through textual and unstructured guidelines.
In this thesis, it is evaluated whether clinically relevant genetic variants can be annotated with guideline information using a web service, and areas of challenge are identified. The proposed tool displays a formal representation of pharmacogenomic guideline information through a web service and existing resources. It enables the annotation of variant call format (VCF) files with clinical guideline information from the Pharmacogenomics Knowledge Base (PharmGKB) and the Clinical Pharmacogenetics Implementation Consortium (CPIC).
The applicability of the web service to annotate clinically relevant variants with pharmacogenomic guideline information is evaluated by translating five guidelines into a web service workflow and executing the process to annotate publicly available genomes. The workflow finds genetic variants covered in CPIC guidelines and the drugs they influence.
The results show that the web service could be used to annotate clinically relevant variants in real time with up-to-date pharmacogenomic guideline information, although several challenges, such as translating variants into star-allele nomenclature and the absence of a unique haplotype nomenclature, remain before this approach can be implemented clinically and applied to other drugs.
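The annotation step can be illustrated as a lookup of VCF data lines against a guideline table. The variant and guideline text below are hypothetical placeholders, and real star-allele translation is far more involved:

```python
GUIDELINES = {  # hypothetical excerpt keyed by (chrom, pos, alt)
    ("10", "96541616", "A"): "CYP2C19 variant - consider alternative to clopidogrel",
}

def annotate_vcf_lines(vcf_lines, guidelines=GUIDELINES):
    """Attach guideline text to matching VCF data lines (toy parser:
    only the CHROM, POS, REF, ALT columns are inspected)."""
    annotated = []
    for line in vcf_lines:
        if line.startswith("#"):          # skip meta and header lines
            continue
        chrom, pos, _id, ref, alt = line.split("\t")[:5]
        note = guidelines.get((chrom, pos, alt))
        if note:
            annotated.append((chrom, pos, ref, alt, note))
    return annotated
```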
There are many drug interactions, and knowing every single one is impossible. In Uganda, a country in East Africa, patients often do not receive a patient information leaflet when a physician prescribes drugs, because they get the drugs without packaging or an insert. Even in developed countries, many people die because of drug interactions.
This work aims at developing a clinical decision support system for four kinds of drug interactions: 1) drug-drug interactions, 2) drug-food interactions, 3) drug-condition interactions and 4) drug-disease interactions. This system must be integrated into an existing hospital information system called the electronic Health Management Information System (eHMIS).
In the first part of this thesis, different kinds of clinical decision support systems are described to find out which one is best suited for eHMIS; the two types are knowledge-based and non-knowledge-based systems. In the second part, the database of eHMIS is extended to a full knowledge base for the new module, containing drug-drug, drug-food, drug-disease and drug-condition interactions. For this, new tables were created and filled with data from several drug interaction databases. The last part covers the design of the clinical decision support system for drug interactions on top of the eHMIS knowledge base, including the implementation and its integration into the existing system. Understanding how health professionals in Uganda work with an electronic health system, as well as their other workflows, was important. The system now runs in a hospital in Kampala, the capital of Uganda, and in a level-three health centre in Mifumi, a village in the east of the country.
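The core drug-drug check can be sketched as a pairwise lookup against a knowledge-base table. The interaction entries below are illustrative, not eHMIS's data; drug-food, drug-condition and drug-disease checks would use analogous tables keyed by the respective pair:

```python
INTERACTIONS = {  # hypothetical knowledge-base rows: unordered drug pairs
    frozenset(["warfarin", "aspirin"]): "increased bleeding risk",
    frozenset(["metronidazole", "alcohol"]): "disulfiram-like reaction",
}

def check_prescription(drugs, kb=INTERACTIONS):
    """Return an alert for every interacting pair in a prescription."""
    alerts = []
    for i in range(len(drugs)):
        for j in range(i + 1, len(drugs)):
            pair = frozenset([drugs[i], drugs[j]])
            if pair in kb:
                alerts.append((sorted(pair), kb[pair]))
    return alerts
```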
Background: Stroke rehabilitation is a complex process that requires collaboration between stroke patients
and various health professionals. One important component of the rehabilitation is to set goals collaboratively with health professionals. The goal setting process can be time-consuming. In many cases, it is complicated for the patient and difficult to track for the health professionals. A simple user interface that supports patients, their family members and health professionals can help both sides to make the goal setting and attainment process easier.
Objectives: The aim is to design and develop a software system for the goal attainment process of stroke patients with milder disabilities that facilitates the goal setting process and the traceability of goal progress for patients and health professionals.
Methods: Based on previously evaluated results, the web interface was developed and improved. Using this knowledge, a goal setting interface was added. To analyze the goal setting process, goal attainment scaling (GAS) was included, as well as parts of the International Classification of Functioning, Disability and Health (ICF) core set for stroke. The results were discussed afterwards in focus groups and evaluated with two stroke patients, one family member and health professionals.
Results: We developed an interactive prototype that can aid rehabilitation at home by entering problems with ICF codes and different kinds of goals, creating new activities, and tracking goal progress by reviewing the different goals. With the help of GAS, the outcomes of the patient’s goals are visualized in a line chart presenting the positive or negative outcomes of the stroke rehabilitation.
Conclusion: The interactive prototype showed that it can support stroke patients during their rehabilitation
at home. A usability test indicated that the goal setting and attainment process was perceived as useful for patients and their family members. Small improvements have to be made to simplify use and error handling. For health professionals, the prototype could also simplify the documentation process by using ICF in the prototype, and also improving collaboration when using the tool for coordination.
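Goal attainment scaling aggregates per-goal attainment levels into a summary T-score. Below is a sketch of the standard Kiresuk & Sherman formula, T = 50 + 10·Σ(wᵢxᵢ) / sqrt((1−ρ)·Σwᵢ² + ρ·(Σwᵢ)²); whether the prototype uses exactly this weighting is an assumption:

```python
def gas_t_score(attainments, weights, rho=0.3):
    """GAS summary T-score: attainments x_i are scored on the usual
    -2..+2 scale, weights w_i reflect goal importance, and rho is the
    assumed intercorrelation between goal scales (conventionally 0.3)."""
    swx = sum(w * x for w, x in zip(weights, attainments))
    sw2 = sum(w * w for w in weights)
    sw = sum(weights)
    return 50.0 + 10.0 * swx / ((1 - rho) * sw2 + rho * sw * sw) ** 0.5
```

A score of 50 means goals were attained exactly as expected on average; values above 50 indicate better-than-expected attainment.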
An architectural concept for implementing the socio-technical workflow of Digital Pathology in Chile
(2014)
Virtual Microscopy opens up the possibility to remotely access high-quality images at large scales for scientific research, education, and clinical application. For clinical diagnostics, Digital Pathology (DP) presents a novel opportunity to reduce variability [Bauer et al., 2013] due to reproducible access to Whole Slide Imaging, quantitative parameters (e.g. HER2-stained membrane) [Al-Janabi et al., 2012], second opinion and Quality Assurance [Ho et al., 2013]. Despite the mentioned advantages, the challenge remains to incorporate DP into the pathologists' workflow within a heterogeneous environment of systems and infrastructures [Stathonikos et al., 2013]. Different issues must be solved in order to optimize the impact of DP in daily clinical practice [Daniel et al., 2012] [Ho et al., 2006]. The integration needs precise planning and comprehensive evaluation for adopting this technology
[Stathonikos et al., 2013]. This thesis will focus on an organizational development approach based on a Socio-Technical System (STS). The socio-technical approach covers (i) the technical issue: tissue scanner, NDP.view, NDP.serve, analysis software, and (ii) the social issue: pathologists, technicians. In order to improve the integration, a joint optimization (of i and ii) is necessary. The developed STS approach will optimize the integration of DP towards improved workflows in clinical environments. The improved workflows will reduce the pathologists' turnaround time, improve the certainty of the diagnostics, and provide more effective patient care within the covered institutions. An overt multi-site Participatory Observation, Questionnaires, and Business Process Modelling Notation will be used to analyse the existing pathological workflows. Based on this, the system will be modelled with the 3lgm2 Toolkit [Winter et al., 2007] under consideration of the various technical subsystems that are present in the clinical environment. Afterwards, the interfaces between subsystems and their possible interoperability will be evaluated, taking into account the different existing standards and guidelines for image processing and management, as well as business processes in DP. In order to analyse the existing preconditions, a questionnaire will be evaluated to establish a robust and valid view. In addition, the overt participatory observation will support this elevation, giving a deeper insight into the social part. This observation also covers the technical side, including the whole pathological process. The socio-technical model will then reveal measurable potential for optimization with incorporated DP (e.g. higher throughput for slides).
The organizational development approach, consisting of a Socio-Technical System based on overt multi-site participatory observations, questionnaires, business process modelling and 3LGM2, will optimize the use of Digital Pathology in daily clinical practice and raise the acceptance of the new technology within the daily workflow through a user-centred process of incorporation.
• Perform and evaluate a questionnaire and a participant observation of pathologists' work days in private and public institutions
• Create and evaluate a 3lgm2 model
• Model the current pathological process (from the viewpoint of pathologist and technical assistant) and perform and evaluate a contextual inquiry to elicit the pathologists' requirements and expectations towards the system
• Test the future workflow according to the model parameters.
This project will detect unsuspected interrelations and interdependencies within the socio-technical workflow of a pathology laboratory. The observation will reveal the action conformity as well as the environment in which the process has to be embedded. Furthermore, it will establish an optimized workflow for a specific clinical environment to prepare the implementation of DP. Additionally, it will be possible to quantify digitized images in order to improve decision making and, ultimately, patient care. In the future it will be possible to extend automated image analysis in order to support clinical decision making. Depending on acceptance, this can lead towards automated clinical decision support for cases with low complexity.
The aim of this master’s thesis is the design and implementation of a dedicated software system for planning and conducting occupational therapy interventions and research studies in a driving simulator environment. In the first part, the concept based on user requirements is presented. It consists of architectural patterns and guidelines with the main focus on utility and application security. The result of this part is the design of a web application which supports integration in a clinical as well as a research environment. The second part presents the reference implementation of the previously introduced concept. It was developed as part of a case study in a research facility which hosts a driving simulator. Close cooperation with the researchers and the influence of their experience led to a product which provides advanced usability for the target users. In conclusion, the thesis validated the concept indirectly through a testing phase of the reference implementation. It provides the basis for a follow-up project to refine the software product and extend the concept to different fields of application.
The Greifswald University Hospital in Germany conducts a research project called "Greifswald Approach to Individualized Medicine (GANI_MED)", which aims at improving patient care through personalized medicine. As a result of this project, multiple regional patient cohorts have been set up for different common diseases. The collected data of these cohorts will act as a resource for epidemiological research. Researchers will be able to use this data for their studies by utilizing a variety of descriptive metadata attributes. The actual medical datasets of the patients are integrated from multiple clinical information systems and medical devices. Yet, at this point in the process of defining a research query, researchers do not have proper tools to query for existing patient data. There are no tools available which offer a metadata catalogue linked to observational data, which would allow convenient research. Instead, researchers have to issue an application for selected variables that fit the conditions of their study and wait for the results. That leaves the researchers not knowing in advance whether there are enough (or any) patients fitting the specified inclusion and exclusion criteria. The "Informatics for Integrating Biology and the Bedside (i2b2)" framework has been assessed and implemented as a prototypical evaluation instance for solving this issue. i2b2 will be set up at the Institute for Community Medicine (ICM) in Greifswald in order to act as a preliminary query tool for researchers. As a result, the development of a research data import routine and customizations of the i2b2 web client were successfully performed. An important part of the solution is that the metadata import can adapt to changes in the metadata: new metadata items can be added without changing the import program. The results of this work are discussed and a further outlook is given in this thesis.
Ambulant studies are dependent on the behavior and compliance of subjects in their home environment. Especially during interventions on the musculoskeletal system, monitoring physical activity is essential, even for research on nutritional, metabolic, or neuromuscular issues. To support an ambulant study at the German Aerospace Center (DLR), a pattern recognition system for human activity was developed. Everyday activities of static (standing, sitting, lying) and dynamic nature (walking, ascending stairs, descending stairs, jogging) were under consideration. Two tri-axial accelerometers were attached to the hip and parallel to the tibia. Pattern-characterizing features from the time domain (mean, standard deviation, absolute maximum) and the frequency domain (main frequencies, spectral entropy, autoregressive coefficients, signal magnitude area) were extracted. Artificial neural networks (ANN) with a feedforward topology were trained with backpropagation as the supervised learning algorithm. An evaluation of the resulting classifier was conducted with 14 subjects completing an activity protocol and a freely chosen course of activities. An individual ANN was trained for each subject. Accuracies of 87.99 % and 71.23 % were achieved in classifying the activity protocol and the free run, respectively. Reliabilities of 96.49 % and 76.77 % were measured. These performance parameters represent a working ambulant physical activity monitoring system.
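Feature extraction per accelerometer window can be sketched as follows; this covers only a subset of the listed features, and the window length and axes are illustrative:

```python
def window_features(ax, ay, az):
    """Per-window features from three accelerometer axes: per-axis mean,
    standard deviation and absolute maximum, plus the signal magnitude
    area (mean of summed absolute values across axes)."""
    feats = []
    n = len(ax)
    for axis in (ax, ay, az):
        mean = sum(axis) / n
        var = sum((v - mean) ** 2 for v in axis) / n
        feats += [mean, var ** 0.5, max(abs(v) for v in axis)]
    sma = sum(abs(x) + abs(y) + abs(z) for x, y, z in zip(ax, ay, az)) / n
    feats.append(sma)
    return feats
```

These feature vectors, one per sliding window, would then be fed to the feedforward ANN classifier.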
Aside from hardware, a major component of a Brain-Computer Interface (BCI) is the software that provides the tools for translating raw acquired brain signals into commands to control an application or a device. There is a range of such software, some proprietary, like MATLAB, and some free and open source (FOSS), accessible under the GNU General Public License (GNU GPL). OpenViBE is one such freely accessible software. This thesis carries out a functionality and usability test of the platform, looking at its portability, architecture, and communication protocols. To investigate the feasibility of reproducing the P300 xDAWN speller BCI provided by OpenViBE, users focused on a character on a 6x6 alphanumeric grid whose rows and columns flashed in random order. A visual stimulus is presented to a user every time the character they are focusing on is highlighted in a row or column. A TMSi analog-to-digital converter was used together with a 32-channel active electrode cap (actiCAP) to record the users' electroencephalogram (EEG), which was then used in an offline session to train the spatial filter algorithm and the classifier to identify the P300 evoked potentials, elicited as a user's reaction to an external stimulus. In an online session, the users tried to spell with the application using only their brain signals. Aspects of evoked potentials (EP), both auditory (AEP) and visual (VEP), are further investigated to validate the results of the P300 speller.
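The core signal-processing idea behind detecting an evoked potential such as the P300 is stimulus-locked averaging: epochs cut at each flash onset are averaged, attenuating background EEG while the time-locked response survives. The sketch below shows this for a single channel; it is a textbook illustration, not OpenViBE code, and a real speller adds spatial filtering (xDAWN) and a trained classifier on top.

```python
def average_epochs(eeg, stim_onsets, epoch_len):
    """Average single-channel EEG segments time-locked to stimulus
    onsets. Background EEG, uncorrelated with the stimuli, averages
    towards zero; the evoked potential (e.g. the P300) remains.
    Illustrative sketch only, not the OpenViBE implementation."""
    epochs = [eeg[t:t + epoch_len] for t in stim_onsets
              if t + epoch_len <= len(eeg)]
    n = len(epochs)
    # average sample-by-sample across all epochs
    return [sum(samples) / n for samples in zip(*epochs)]
```

In a speller, the row/column whose averaged (and classified) response contains the strongest P300 identifies the attended character.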
Clinical diagnosis ideally relies on quantitative measures of disease. For a number of diseases, diagnostic guidelines require or at least recommend neuroimaging exams to support the clinical findings. As such, there is also an increasing interest in deriving quantitative results from magnetic resonance imaging (MRI) examinations, i.e. images providing quantitative T1, T2, and T2* tissue parameters. Quantitative MRI protocols, however, often require prohibitively long acquisition times (> 10 minutes), and no standards have been established to regulate and control MRI-based quantification. This work aims at exploring the technical feasibility of accelerating existing MRI acquisition schemes to enable an approximately 3-minute clinical imaging protocol for quantitative tissue parameters such as T2 and T2*, and at identifying technical factors that are key to obtaining accurate results. In the first part of this thesis, the signal model of an existing quantitative T2-mapping algorithm is expanded to explore the methodology for broader use, including the application to T2* and its use in the presence of imperfect imaging conditions and system-related limitations of the acquisition process. The second part of this thesis is dedicated to optimizing the iterative mapping algorithm for robust clinical application, including the integration on a clinical MR platform. This translation of technology is a major step towards enabling and validating such new methodology in a realistic clinical environment. The robustness and accuracy of the developed and implemented model are investigated by comparison with the "gold standard" information from fully sampled phantom and in-vivo MRI data.
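The underlying signal model for T2 (and analogously T2*) mapping is a mono-exponential decay, S(TE) = S0 · exp(−TE / T2). As a point of reference, the simplest estimator is a pixelwise log-linear least-squares fit over the echo times; this sketch shows that baseline only — the thesis describes a model-based iterative algorithm operating on accelerated (undersampled) acquisitions, which this does not reproduce.

```python
import math

def fit_t2(echo_times, signals):
    """Estimate T2 from multi-echo magnitudes via the linearisation
    ln S = ln S0 - TE / T2, i.e. a least-squares line through
    (TE, ln S); T2 is the negative reciprocal of the slope.
    Baseline illustration, not the thesis' iterative reconstruction."""
    n = len(echo_times)
    ys = [math.log(s) for s in signals]
    mean_t = sum(echo_times) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y)
                 for t, y in zip(echo_times, ys))
             / sum((t - mean_t) ** 2 for t in echo_times))
    return -1.0 / slope  # T2 in the same units as the echo times

# Synthetic noiseless decay with T2 = 80 ms recovers 80.0:
tes = [10, 20, 40, 80]
sig = [math.exp(-te / 80.0) for te in tes]
print(round(fit_t2(tes, sig), 1))  # → 80.0
```

Model-based approaches improve on this baseline precisely where it breaks down: noisy, undersampled data and imperfect imaging conditions.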
Quantitative assessment of Positron Emission Tomography (PET) imaging can be used for diagnosis and staging of tumors and for monitoring of response in cancer treatment. In clinical practice, PET analysis is based on normalized indices such as those based on the Standardized Uptake Value (SUV). Although extensively evaluated, these indices are considered quite unstable, mainly because of the simplicity of their experimental protocol. Development and validation of more sophisticated methods for the purposes of clinical research require a common open platform that can be used both for prototyping and sharing of the analysis methods, and for their evaluation by clinical users. This work was motivated by the lack of such a platform for longitudinal quantitative PET analysis. By following a prototype-driven software development approach, an open source tool for quantitative analysis of tumor changes based on multi-study PET image data has been implemented. As the platform for this work, 3D Slicer 4, a free open source software application for medical image computing, has been chosen. For the analysis and quantification of PET data, the implemented software tool guides the user through a series of workflow steps. In addition to the implementation of a guided workflow, the software was made extensible by integration of interfaces for the enhancement of segmentation and PET quantification algorithms. By offering extensibility, the PET analysis software tool was transformed into a platform suitable for prototyping and development of PET-specific segmentation and quantification methods. The accuracy, efficiency, and usability of the platform were evaluated in reproducibility and usability studies. The results achieved in these studies demonstrate that the implemented longitudinal PET analysis software tool fulfills all requirements for the basic quantification of tumors in PET imaging and at the same time provides an efficient and easy-to-use workflow.
Furthermore, it can function as a platform for prototyping of PET-specific segmentation and quantification methods, which in the future can be incorporated in the workflow.
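The body-weight-normalised SUV underlying the indices mentioned above has a standard definition: measured activity concentration divided by injected dose per unit body weight (conventionally assuming a tissue density of 1 g/ml). This sketch states that definition only; it is not code from the 3D Slicer extension described in the abstract.

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight-normalised Standardized Uptake Value:
    SUV = C_tissue / (injected dose / body weight), with the usual
    assumption of 1 g/ml tissue density so g and ml interchange.
    Standard definition, not the thesis' implementation."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Example: 5 kBq/ml tumor uptake, 370 MBq injected, 70 kg patient
print(round(suv(5000.0, 370e6, 70000.0), 2))  # → 0.95
```

An SUV of 1.0 would mean the tracer is distributed uniformly through the body; tumor uptake is typically reported relative to that, and the decay correction and timing details that make such indices unstable in practice are exactly what the platform is meant to help investigate.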