Search results:
Found 7
If numeric data from the Web are brought together, natural scientists can compare climate measurements with estimates, financial analysts can evaluate companies based on balance sheets and daily stock market values, and citizens can explore GDP per capita figures from several data sources. However, the heterogeneity and sheer size of the data remain a problem. This work presents methods to query a uniform view, the Global Cube, over the datasets available on the Web, building on Linked Data query approaches.
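As a rough illustration of the kind of query such a uniform view supports, the sketch below retrieves observations from a statistical dataset published with the W3C RDF Data Cube vocabulary; the endpoint and dataset URIs are placeholders and not part of the work described above.

```python
# Minimal sketch: querying observations of an integrated statistical dataset
# published with the W3C RDF Data Cube vocabulary. The endpoint and dataset
# URIs below are placeholders, not artifacts of the described work.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/sparql"        # hypothetical SPARQL endpoint
DATASET = "http://example.org/dataset/gdp"    # hypothetical qb:DataSet URI

query = """
PREFIX qb:   <http://purl.org/linked-data/cube#>
PREFIX sdmx: <http://purl.org/linked-data/sdmx/2009/measure#>

SELECT ?obs ?value WHERE {
  ?obs a qb:Observation ;
       qb:dataSet <%s> ;
       sdmx:obsValue ?value .
} LIMIT 10
""" % DATASET

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["obs"]["value"], row["value"]["value"])
```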
Enterprise application integration (EAI) and e-business integration are time-consuming and expensive. This thesis proposes pattern mining to identify identical object classes across applications. Processes are integrated based on declared integration goals and known software behavior, and a model-driven approach ensures that behavioral knowledge from development is used consistently during integration. The contributions were applied to the CCTS Modeler Warp 10 and SAP NetWeaver CE (Composition Environment), both developed at SAP.
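The toy sketch below is not the pattern-mining method of the thesis; it merely illustrates the underlying idea of detecting matching object classes across two applications, here by comparing attribute-name overlap. All class and attribute names are invented.

```python
# Toy illustration only: matching object classes from two applications by the
# overlap of their attribute names (Jaccard similarity). This is NOT the
# thesis's pattern-mining approach, just a minimal stand-in for the idea of
# detecting "identical" object classes across systems.

def jaccard(a: set, b: set) -> float:
    """Similarity of two attribute sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

crm_classes = {                       # hypothetical schema of application A
    "Customer": {"id", "name", "email", "address"},
    "SalesOrder": {"id", "customer_id", "total", "date"},
}
erp_classes = {                       # hypothetical schema of application B
    "BusinessPartner": {"id", "name", "email", "address", "vat_number"},
    "Order": {"id", "partner_id", "amount", "date"},
}

for c_name, c_attrs in crm_classes.items():
    best_name, best_attrs = max(erp_classes.items(),
                                key=lambda kv: jaccard(c_attrs, kv[1]))
    print(f"{c_name} <-> {best_name} "
          f"(similarity {jaccard(c_attrs, best_attrs):.2f})")
```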
The articles in this Research Topic provide a state-of-the-art overview of current progress in integrating computational and empirical research on visual object recognition. Developments in this exciting multidisciplinary field have recently gained momentum: high-performance computing has enabled breakthroughs in computer vision and computational neuroscience. In parallel, innovative machine learning applications have become available for mining the large-scale, high-resolution brain data acquired with (ultra-high-field) fMRI and dense multi-unit recordings. Finally, new techniques for integrating such rich simulated and empirical datasets for direct model testing could aid the development of a comprehensive brain model. We hope that this Research Topic contributes to these encouraging advances and inspires future research avenues in computational and empirical neuroscience.
object recognition --- computational neuroscience --- Computer Vision --- fMRI --- Neurophysiology --- Feature representation --- ventral visual pathway --- invariance --- neural networks --- multimodal data integration
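One common way to test a computational model directly against fMRI data is representational similarity analysis. The sketch below uses random arrays in place of real recordings and network activations; it is a generic illustration, not a method taken from these articles.

```python
# Sketch of representational similarity analysis (RSA): compare the
# representational geometry of a model with that of brain data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 40

model_features = rng.standard_normal((n_stimuli, 512))  # stand-in for CNN activations
fmri_patterns = rng.standard_normal((n_stimuli, 200))    # stand-in for voxel responses

# Representational dissimilarity matrices in condensed form: one dissimilarity
# per stimulus pair, here 1 - Pearson correlation between response patterns.
rdm_model = pdist(model_features, metric="correlation")
rdm_brain = pdist(fmri_patterns, metric="correlation")

# Compare the two representational geometries with a rank correlation.
rho, p = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```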
The history of livestock started with the domestication of their wild ancestors: a restricted number of species allowed themselves to be tamed and entered a symbiotic relationship with humans. In exchange for food, shelter and protection, they provided us with meat, eggs, hides, wool and draught power, thus contributing considerably to our economic and cultural development. Depending on the species, domestication took place in different areas and periods. After domestication, livestock spread over all inhabited regions of the earth, accompanying human migrations and also becoming trade objects. This required adaptation to different climates and varying styles of husbandry and resulted in an enormous phenotypic diversity.

Approximately 200 years ago, the situation started to change with the rise of the concept of breed. Animals were selected for the same visible characteristics, and crossing with different phenotypes was reduced. This resulted in the formation of distinct breeds, mostly genetically isolated from other populations. A few decades ago, selection pressure increased again as intensive production focused on a limited range of types, with a subsequent loss of genetic diversity. For short-term economic reasons, farmers have abandoned traditional breeds. As a consequence, during the 20th century at least 28% of farm animal breeds became extinct, rare or endangered. The situation is alarming in developing countries, where native breeds adapted to local environments and diseases are being replaced by industrial breeds. In the most marginal areas, farm animals are considered essential for viable land use and, in the developing world, a major pathway out of poverty.

Historic documentation from the period before breed formation is scarce, so reconstruction of the history of livestock populations depends on archaeological and archaeozoological evidence and on DNA analysis of extant populations. Scientific research into genetic diversity takes advantage of the rapid advances in molecular genetics. Studies of mitochondrial DNA, microsatellite DNA profiling and Y-chromosomes have revealed details of the process of domestication, of the diversity retained by breeds and of the relationships between breeds. However, these markers reveal only a small part of the genetic information, and the advent of new technologies is most timely for answering many essential questions. High-throughput single-nucleotide polymorphism (SNP) genotyping is about to become available for all major farm animal species, and recent developments in sequencing techniques call for new methods of data management and analysis and for new ideas for extracting information. To make sense of this information under practical conditions, the integration of geo-environmental and socio-economic data is a key element. The study and management of farm animal genomic resources (FAnGR) is thus a major multidisciplinary issue.

The goal of the present Research Topic was to collect contributions of high scientific quality relevant to biodiversity management, applying new methods to the genomic and bioinformatic characterization of FAnGR, to the development of ex-situ and in-situ FAnGR conservation methods, to socio-economic aspects of FAnGR conservation, to the transfer of lessons between wildlife and livestock biodiversity conservation, and to the contribution of FAnGR to a transition in agriculture (FAnGR and agro-ecology).
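As a minimal, generic illustration of SNP-based diversity characterization (not drawn from the Research Topic itself), the sketch below estimates allele frequencies and expected heterozygosity from a synthetic genotype matrix coded as 0/1/2 copies of the alternate allele.

```python
# Illustrative sketch: allele frequencies and expected heterozygosity per SNP,
# a basic building block of diversity characterization for farm animal
# genetic resources. The genotype matrix is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_snps = 100, 500
genotypes = rng.integers(0, 3, size=(n_animals, n_snps))  # 0/1/2 allele counts

p = genotypes.mean(axis=0) / 2.0   # alternate-allele frequency per SNP
exp_het = 2 * p * (1 - p)          # expected heterozygosity under Hardy-Weinberg

print(f"mean expected heterozygosity across {n_snps} SNPs: {exp_het.mean():.3f}")
```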
A “genotype” is essentially an organism's full hereditary information, obtained from its parents. A “phenotype” is an organism's actual observed physical and behavioral properties, including traits such as morphology, size, height, eye color and metabolism. One of the pressing challenges in computational and systems biology is genotype-to-phenotype prediction. This is challenging given the amount of data generated by modern Omics technologies: this “Big Data” is so large and complex that traditional data processing applications are not up to the task. Challenges arise in the collection, analysis, mining, sharing, transfer, visualization, archiving, and integration of these data. This Special Issue focuses on the systems-level analysis of Omics data, recent developments in gene ontology annotation, and advances in biological pathways and network biology. The integration of Omics data with clinical and biomedical data using machine learning is explored. The Special Issue also covers new methodologies in the context of gene–environment interactions, tissue-specific gene expression, and how external factors or host genetics impact the microbiome.
tissue-specific expressed genes --- transcriptome --- tissue classification --- support vector machine --- feature selection --- bioinformatics pipelines --- algorithm development for network integration --- miRNA–gene expression networks --- multiomics integration --- network topology analysis --- candidate genes --- gene–environment interactions --- logic forest --- systemic lupus erythematosus --- Gene Ontology --- KEGG pathways --- enrichment analysis --- proteomic analysis --- plot visualization --- Alzheimer’s disease --- dementia --- cognitive impairment --- neurodegeneration --- Gene Ontology --- annotation --- biocuration --- amyloid-beta --- microtubule-associated protein tau --- artificial intelligence --- genotype --- phenotype --- deep phenotype --- data integration --- genomics --- phenomics --- precision medicine informatics --- epigenetics --- chromatin modification --- sequencing --- regulatory genomics --- disease variants --- machine learning --- multi-omics --- data integration --- curse of dimensionality --- heterogeneous data --- missing data --- class imbalance --- scalability --- genomics --- pharmacogenomics --- cell lines --- database --- drug sensitivity --- data integration --- omics data --- genomics --- RNA expression --- non-omics data --- clinical data --- epidemiological data --- challenges --- integrative analytics --- joint modeling --- multivariate analysis --- multivariate causal mediation --- distance correlation --- direct effect --- indirect effect --- causal inference --- n/a
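A hedged sketch of genotype-to-phenotype prediction framed as supervised learning follows; the SNP genotypes and binary phenotype are synthetic, and the random-forest model is an arbitrary illustrative choice rather than a method prescribed by the Special Issue.

```python
# Sketch: genotype-to-phenotype prediction as a supervised learning problem.
# All data are synthetic; the model choice is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_snps = 200, 1000
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # 0/1/2 genotypes

# Binary phenotype driven by a handful of "causal" SNPs plus noise (synthetic).
causal = rng.choice(n_snps, size=10, replace=False)
y = (X[:, causal].sum(axis=1) + rng.normal(0, 1, n_samples) > 10).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```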
This open access book is part of the LAMBDA Project (Learning, Applying, Multiplying Big Data Analytics), funded by the European Union, GA No. 809965. Data Analytics involves applying algorithmic processes to derive insights. Nowadays it is used in many industries to allow organizations and companies to make better decisions as well as to verify or disprove existing theories or models. The term data analytics is often used interchangeably with intelligence, statistics, reasoning, data mining, knowledge discovery, and others. The goal of this book is to introduce some of the definitions, methods, tools, frameworks, and solutions for big data processing, starting from the process of information extraction and knowledge representation, via knowledge processing and analytics to visualization, sense-making, and practical applications. Each chapter in this book addresses some pertinent aspect of the data processing chain, with a specific focus on understanding Enterprise Knowledge Graphs, Semantic Big Data Architectures, and Smart Data Analytics solutions. This book is addressed to graduate students from technical disciplines, to professional audiences following continuous education short courses, and to researchers from diverse areas following self-study courses. Basic skills in computer science, mathematics, and statistics are required.
Database Management --- Information Systems Applications (incl. Internet) --- Logic in AI --- Computer Appl. in Administrative Data Processing --- Business Information Systems --- Computer and Information Systems Applications --- Computer Application in Administrative Data Processing --- artificial intelligence --- big data --- data analytics --- data handling --- data integration --- data mining --- databases --- digital storage --- domain knowledge --- graph theory --- information management --- information technology --- integrated data --- internet --- knowledge management --- knowledge-based system --- ontologies --- semantics --- Databases --- Database programming --- Information retrieval --- Internet searching --- Artificial intelligence --- Public administration --- Information technology: general issues --- Business mathematics & systems
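The minimal sketch below conveys the flavor of the knowledge-graph workflows such material discusses: a few triples are asserted with rdflib and then queried with SPARQL. All entity and property names are invented for illustration.

```python
# Minimal sketch of a small knowledge graph: assert triples, query with SPARQL.
# Entity and property names are invented; nothing here comes from the book.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/kg/")
g = Graph()

g.add((EX.acme, RDF.type, EX.Company))
g.add((EX.acme, EX.hasName, Literal("ACME Corp")))
g.add((EX.widgetline, RDF.type, EX.Product))
g.add((EX.widgetline, EX.producedBy, EX.acme))

results = g.query("""
    PREFIX ex: <http://example.org/kg/>
    SELECT ?product ?name WHERE {
        ?product a ex:Product ;
                 ex:producedBy ?company .
        ?company ex:hasName ?name .
    }
""")
for product, name in results:
    print(product, "is produced by", name)
```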
Advancing the knowledge of the tangible components (position, size, shape) and intangible components (identity, habits) of a historic building or site involves fundamental and complex tasks in any project related to the conservation of cultural heritage (CH). In recent years, new geotechnologies have proven their usefulness and added value to the CH field in the tasks of recording, modeling, conserving, and visualizing. In addition, current developments in historic building information modeling (HBIM) allow the integration and simulation of different sources of information, generating a digital twin of any complex CH construction. As a result, experts in the area have an ever-growing number of sensors and methodologies at their disposal. However, the rapid evolution of geospatial technologies makes it necessary to revise their use, integration, and application in CH, a process that is difficult to adopt because of the new options it opens for the study, analysis, management, and valorization of CH. The aim of the present Special Issue is therefore to cover the latest relevant topics, trends, and best practices in geospatial technologies and processing methodologies for CH sites and scenarios, and to introduce new tendencies. This book originates from the Special Issue “Data Acquisition and Processing in Cultural Heritage”, focusing primarily on data and sensor integration for CH; documentation/restoration in CH; heritage 3D documentation and modeling of complex CH sites; drone inspections in CH; software development in CH; and augmented reality in CH. It is hoped that this book will provide the advice and guidance required for any CH professional to make the best possible use of these sensors and methods.
frescoed vault --- planar representation of vault --- cultural heritage documentation --- cultural heritage --- virtual restoration --- spatial geometric features --- skeleton line --- regression analysis --- Dazu Thousand-Hand Bodhisattva statue --- community --- heritage resources management --- open-source software --- web-based GIS --- Arches-HIP --- augmented reality --- navigation --- instant tracking --- 3D reconstruction --- cultural heritage --- computer graphics --- CATCHA --- camera tracking --- cultural heritage --- real-time --- point clouds --- heritage --- photogrammetry --- laser scanning --- tropical --- vernacular --- multi-scale --- multi-sensor --- terrestrial laser scanning --- unmanned aerial vehicle photogrammetry --- integrated three-dimensional modeling --- digital documentation --- cultural heritage site --- 4D modeling --- cultural heritage --- data fusion --- monitoring --- visualization --- out of plumb --- close range photogrammetry --- cultural heritage preservation --- cylinder fitting --- point cloud --- lost heritage --- drones --- photogrammetry --- thermal imaging --- thermal dynamics --- dry-stone walls --- terraces --- heritage --- landscape --- geomatics --- UNESCO --- 3D models --- multisensor --- multiscale --- SLAM --- MMS --- LiDAR --- UAV --- data integration --- data fusion --- cultural heritage --- cultural heritage --- replica --- photogrammetry --- laser-scanning --- 3D printing
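As a generic illustration of multi-sensor data integration for a heritage site (not a workflow taken from the book), the sketch below brings a photogrammetric point cloud into the reference frame of a laser scan using an assumed rigid transform and merges the two clouds; in practice the transform would come from a registration step (e.g. targets or ICP), which is not shown.

```python
# Sketch: merge two synthetic point clouds after applying a known rigid
# transform. The transform is assumed/given, standing in for a registration
# result; no real survey data are used.
import numpy as np

rng = np.random.default_rng(7)
laser_scan = rng.uniform(0, 10, size=(1000, 3))      # synthetic cloud (metres)
photogrammetry = rng.uniform(0, 10, size=(800, 3))   # synthetic cloud, own frame

# Assumed rigid transform: 15-degree rotation about the vertical axis plus a shift.
theta = np.radians(15.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, -1.0, 0.3])

aligned = photogrammetry @ R.T + t         # transform into the laser-scan frame
merged = np.vstack([laser_scan, aligned])  # single integrated cloud

print(f"merged cloud: {merged.shape[0]} points")
```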