The Web is full of data sources that we want to manipulate at large scale. A common approach is to represent this data as a data or knowledge graph: for example, open and linked data (open data), social networks, and online encyclopedias. This approach is also used by the major web companies, Alphabet (in Google) and Meta (in Facebook).

The advantage of knowledge graphs is that they can be queried with logical languages, and that structural properties can be learned from them.

While knowledge graphs are very important tools for managing data on the Web, not all data on the Web is published in such a model. It is therefore necessary to search and learn from text and other less structured content in order to build new graphs.

This course introduces the main steps that a data science engineer needs to know to extract knowledge from large volumes of data.
It will familiarize you with concrete tools for:

  • Manipulating and visualizing graphs.
  • Classifying nodes and subgraphs using machine learning.
  • Reasoning in knowledge graphs, using Semantic Web technologies.
  • Finding connections between different graphs, or between texts and graphs, using semantics.
  • Mining and extracting information from textual data.
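As a taste of the first skill above, here is a minimal sketch of representing a knowledge graph as labelled triples and querying it in plain Python. The course does not prescribe this representation or these entity names; they are illustrative assumptions.

```python
# A knowledge graph as a list of (subject, relation, object) triples.
# All names below are illustrative, not taken from the course material.
triples = [
    ("Alphabet", "owns", "Google"),
    ("Meta", "owns", "Facebook"),
    ("Google", "develops", "KnowledgeGraph"),
]

def objects(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(objects("Alphabet", "owns"))  # ['Google']
```

In practice, dedicated libraries (graph toolkits, RDF stores, SPARQL engines) replace such hand-rolled code; this sketch only shows the underlying graph-as-triples idea.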

The first six sessions are devoted to presenting the concepts and tools; you will then carry out projects in pairs.