Search results: 5 found
Algorithmic identity politics reinstate old forms of social segregation: in a digital world, identity politics is pattern discrimination. By recognizing patterns in input data, Artificial Intelligence algorithms create bias and practice racial exclusion, thereby inscribing power relations into media. How can we filter information out of data without reinserting racist, sexist, and classist beliefs?
Media Studies --- Artificial Intelligence --- Critical Algorithm Studies
The book provides an overview of more than a decade of joint R&D efforts in the Low Countries on human language technology (HLT) for Dutch. It not only presents the state of the art of HLT for Dutch in the areas covered but, even more importantly, describes the resources (data and tools) for Dutch that have been created and are now available to both academia and industry worldwide.

The contributions cover many areas of human language technology for Dutch: corpus collection (including IPR issues) and corpus building (in particular one corpus aiming at a collection of 500M word tokens), lexicology, anaphora resolution, a semantic network, parsing technology, speech recognition, machine translation, text generation (summaries), web mining, information extraction, and text-to-speech, to name the most important ones.

The book also shows how a medium-sized language community (spanning two territories) can create a digital language infrastructure (resources, tools, etc.) as a basis for subsequent R&D. At the same time, it bundles contributions from almost all the HLT research groups in Flanders and the Netherlands, and hence offers a view of their recent research activities.

The targeted readers are mainly researchers in human language technology, in particular those focusing on Dutch: researchers active in larger networks such as CLARIN, META-NET, and FLaReNet, and those participating in conferences such as ACL, EACL, NAACL, COLING, RANLP, CICLing, LREC, CLIN and DIR (both held in the Low Countries), InterSpeech, ASRU, ICASSP, ISCA, EUSIPCO, CLEF, TREC, etc. In addition, some chapters are of interest to human language technology policy makers, and even to science policy makers in general.
Linked Open Data (LOD) is a pragmatic approach for realizing the Semantic Web vision of making the Web a global, distributed, semantics-based information system. This book presents an overview of the results of the research project “LOD2 -- Creating Knowledge out of Interlinked Data”. LOD2 is a large-scale integrating project co-funded by the European Commission within the FP7 Information and Communication Technologies Work Program. Commencing in September 2010, this 4-year project comprised leading Linked Open Data research groups, companies, and service providers from across 11 European countries and South Korea. The aim of the project was to advance the state of the art in research and development in four key areas relevant to Linked Data: (1) RDF data management; (2) the extraction, creation, and enrichment of structured RDF data; (3) the interlinking and fusion of Linked Data from different sources; and (4) the authoring, exploration, and visualization of Linked Data.
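To make the second and third of these areas concrete, here is a minimal sketch in Python using the rdflib library. This is an illustration only, not part of the LOD2 tool stack; the example.org namespace and the choice of DBpedia as the external dataset are assumptions made purely for demonstration.

```python
# Minimal Linked Data sketch using rdflib (illustration only, not LOD2 tooling).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace for the example

g = Graph()
# Structured RDF data: a typed resource with a human-readable label.
g.add((EX.Leipzig, RDF.type, EX.City))
g.add((EX.Leipzig, RDFS.label, Literal("Leipzig", lang="en")))
# Interlinking: assert that our resource denotes the same thing as a
# resource published in another dataset (DBpedia chosen for illustration).
g.add((EX.Leipzig, OWL.sameAs, URIRef("http://dbpedia.org/resource/Leipzig")))

print(g.serialize(format="turtle"))
```

The owl:sameAs link is what turns an isolated RDF graph into Linked Data: it lets a consumer follow the URI into a different dataset and fuse the two descriptions.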
Modern knowledge discovery methods enable users to discover complex patterns of various types in large information repositories. However, the underlying assumption has always been that the data to which the methods are applied originates from one domain. The focus of this book, and of the BISON project from which the contributions originate, is the network-based integration of various types of data repositories and the development of new ways to analyse and explore the resulting gigantic information networks. Instead of finding well-defined global or local patterns, the goal was to find domain-bridging associations, which are by definition not well defined, since they are especially interesting precisely when they are sparse and have not been encountered before. The 32 contributions presented in this state-of-the-art volume, together with a detailed introduction to the book, are organized in topical sections on bisociation; representation and network creation; network analysis; exploration; and applications and evaluation.
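As a rough intuition for what a domain-bridging association looks like, the toy sketch below (using the networkx Python library; the node names and domain labels are invented, and this is not the BISON methodology itself) flags nodes sitting on a sparse link between two otherwise separate domains.

```python
# Toy illustration of a "domain-bridging" node in an information network.
# All node names and domain labels are made up; this is not BISON tooling.
import networkx as nx

G = nx.Graph()
# Two domains that are internally well connected...
G.add_edges_from([("gene_a", "gene_b"), ("gene_b", "gene_c")])
G.add_edges_from([("drug_x", "drug_y"), ("drug_y", "drug_z")])
domain = {n: "biology" for n in ("gene_a", "gene_b", "gene_c")}
domain.update({n: "pharmacology" for n in ("drug_x", "drug_y", "drug_z")})
# ...plus one sparse link between them: the bisociative candidate.
G.add_edge("gene_c", "drug_x")

# Flag nodes whose closed neighbourhood (itself plus its neighbours)
# spans more than one domain.
bridges = [
    n for n in G
    if len({domain[m] for m in G.neighbors(n)} | {domain[n]}) > 1
]
print(bridges)  # ['gene_c', 'drug_x']
```

The single cross-domain edge is exactly the kind of structure that frequency-based pattern mining tends to discard as noise, which is why such associations need dedicated exploration methods.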
This book introduces a novel approach to the design and operation of large ICT systems. It views the technical solutions and their stakeholders as complex adaptive systems and argues that traditional risk analyses cannot predict all future incidents with major impacts. To avoid unacceptable events, it is necessary to establish and operate anti-fragile ICT systems that limit the impact of all incidents and learn from small-impact incidents how to function increasingly well in changing environments.

The book applies four design principles and one operational principle to achieve anti-fragility for different classes of incidents. It discusses how systems can achieve high availability, prevent malware epidemics, and detect anomalies. Analyses of Netflix’s media streaming solution, Norwegian telecom infrastructures, e-government platforms, and Numenta’s anomaly detection software show that cloud computing is essential to achieving anti-fragility for classes of events with negative impacts.
Computer Systems Organization and Communication Networks --- Information Systems and Communication Service --- Artificial Intelligence (incl. Robotics) --- Simulation and Modeling --- anomaly detection, anti-fragility, cloud computing, complex adaptive systems, DevOps (Development and Operation), diversity, loose coupling
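To fix ideas on the anomaly detection mentioned above, here is a minimal sketch of one common approach: a rolling z-score over a stream of metric values. It is far simpler than Numenta’s HTM-based detector analysed in the book and stands in for it only as an illustration; the window size, threshold, and signal values are invented for the example.

```python
# Minimal streaming anomaly detection sketch (rolling z-score).
# Illustration only; much simpler than the HTM detector discussed in the book.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=30, threshold=3.0):
    """Yield (index, value) for points far outside the recent window."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x
        recent.append(x)

# Usage: a steady (hypothetical) metric feed with one injected outlier.
signal = [10.0, 10.2, 9.9] * 40
signal[70] = 42.0
print(list(detect_anomalies(signal)))  # -> [(70, 42.0)]
```

In an anti-fragile setting the point is not the detector itself but what the system does with such signals: each flagged small-impact incident becomes input for learning and adaptation.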