Investigation


Guiding Questions

The investigation phase is a crucial step in the challenge-based learning process, as it sets the foundation for the rest of the project. During this phase, our team focused on generating questions related to the challenge and gathering information to understand it better. Some of the questions we considered include:

  • Who is the target audience for our tool?
  • Are there any limitations or restrictions in terms of video format, length, or language?
  • What specific insights can the application extract from videos?
  • Where can we find videos that will allow us to train our models?
  • What technologies are used to transcribe audio from videos into text?
  • Can we generate consistent summaries comparable to human-written ones?

To answer these questions, we will need to do research and gather data from a variety of sources. This includes interviews with experts, literature reviews, and discussions with members of the community affected by the challenge.


What can we extract from educational videos?

We can extract a wealth of useful insights from educational videos, empowering students with valuable information. Let's explore the key features and insights:

  • Transcription: using advanced speech recognition technology to convert the spoken content of videos into accurate and reliable text transcriptions. This enables students to have a written record of the video's content, facilitating easier review, search, and reference.

  • Sentiment Analysis: using NLP techniques to evaluate the sentiment expressed in the speech. By analyzing the language used, it can determine whether the content conveys positive, negative, or neutral sentiments. This insight can help students gauge the overall sentiment of the video and identify emotionally impactful sections.

  • Question Answering: leveraging the information extracted from the video to provide accurate answers to specific questions about its content. This allows students to quickly find the information they need without having to watch the entire video again.

  • Named Entity Recognition: leveraging named entity recognition algorithms to identify and extract specific entities mentioned in the video, such as names of people, organizations, locations, or other relevant entities. This allows students to quickly identify and comprehend the key entities discussed within the video.

  • Topics / Keywords Extraction: using advanced natural language processing techniques to identify and extract the main topics and keywords discussed in the video. This insight provides students with a clear understanding of the video's subject matter, allowing them to navigate through the content more effectively.

  • Summarization: generating concise summaries of the video's content, capturing the essential points and key takeaways. This feature enables students to grasp the main ideas and concepts without having to watch the entire video, saving time and enhancing their learning efficiency.

By extracting these insights, we can help students navigate the often overwhelming volume of educational video content, engage more actively, and optimize their learning experience. We can also enhance the accessibility and effectiveness of online learning, allowing students to extract knowledge efficiently and make the most of their educational resources.


Follow-Up: The Data

In order to train and develop our NLP models, we will leverage a diverse range of data sources, including video recordings from ESPRIT and YouTube. These datasets will serve as valuable resources for our NLP models to learn and extract insights from educational videos.

The video recordings from ESPRIT will provide us with a rich collection of content that aligns closely with our academic project. These recordings capture lectures, presentations, and tutorials delivered by experienced educators and subject matter experts. This will help us develop NLP models that are specifically tailored to the educational context of ESPRIT, enabling us to extract relevant information and insights that cater to the needs of our target audience.

In addition to the ESPRIT dataset, we will incorporate video recordings from YouTube, which hosts an immense variety of educational content spanning many subjects and domains. By incorporating YouTube data into our training pipeline, we can expose our models to a diverse range of educational videos, allowing them to learn from a broader spectrum of teaching styles, topics, and delivery formats.

By leveraging both ESPRIT and YouTube data, we aim to create NLP models that are robust, adaptable, and capable of handling the complexities and nuances of different educational video sources. The combination of these datasets will enable our models to learn from a vast pool of information, enhancing their ability to perform accurate transcription, sentiment analysis, named entity recognition, topic extraction, question answering, and summarization.
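As a first, hedged sketch of how the YouTube side of this collection could work, the snippet below downloads the audio track of a public video with the yt-dlp library. The URL and output path are placeholders, and any use of this approach must respect the platform's terms of service.

```python
# Hedged sketch: collecting audio from public YouTube lectures with yt-dlp.
# The URL below is a placeholder; in practice we would iterate over a curated
# list of educational videos.
from yt_dlp import YoutubeDL

options = {
    "format": "bestaudio/best",             # audio only, for transcription
    "outtmpl": "data/raw/%(id)s.%(ext)s",   # one file per video id
}

with YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # placeholder URL
```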


Technical Investigation 

It was also very important for us to thoroughly research the technical challenges we may face, in order to identify potential solutions and ensure that our project is feasible. Reading research papers is a valuable way to gather information and gain insights from experts in the field, and the literature we reviewed helped us understand the technical aspects of our project, summarized in the follow-ups below.


Follow-Up: Speech to Text

We have a diverse range of speech-to-text models at our disposal to accurately transcribe the audio content of the videos. You can find the full list of candidate state-of-the-art models at the link below:

📁 Resources: Speech to Text Models
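As an illustration of how transcription could plug into our pipeline, here is a minimal sketch using the Hugging Face transformers library. The Whisper checkpoint and the file name lecture_recording.wav are assumptions for illustration; any model from the list above could be substituted.

```python
# Minimal transcription sketch with the transformers ASR pipeline.
# "openai/whisper-small" is one public checkpoint, chosen for illustration.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# chunk_length_s splits long lecture audio into windows so recordings far
# longer than the model's 30-second input limit can be transcribed.
result = asr("lecture_recording.wav", chunk_length_s=30)  # placeholder file
print(result["text"])
```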


Follow-Up: Text Summarization

In our quest for text summarization, we have the advantage of accessing and fine-tuning multiple powerful models. Among the notable models that we can employ for this purpose are Pegasus and T5. These models have gained recognition for their exceptional performance in generating concise and informative summaries from large bodies of text; a brief usage sketch follows the list below.

  • Pegasus, a transformer-based model, has demonstrated remarkable proficiency in abstractive summarization. It leverages pre-training techniques that allow it to comprehend the context of the input text and generate high-quality summaries that capture the key essence of the source material.
  • T5, which stands for Text-to-Text Transfer Transformer, is a versatile model that can be applied to various natural language processing tasks, including summarization. Through fine-tuning, T5 can be specifically tailored to excel in summarization tasks, producing coherent and informative summaries that condense the essential information from the original text.
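As referenced above, here is a hedged usage sketch with the transformers summarization pipeline. "google/pegasus-xsum" is one public Pegasus checkpoint chosen for illustration, and the transcript string is placeholder text; a fine-tuned T5 checkpoint could be dropped in the same way.

```python
# Hedged summarization sketch; the checkpoint and transcript are placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-xsum")

transcript = (
    "In this lecture we introduce gradient descent, an iterative optimization "
    "algorithm that updates model parameters in the direction of the negative "
    "gradient of the loss function in order to minimize it."
)

# max_length / min_length bound the summary size in tokens; suitable values
# depend on how long our lecture transcripts turn out to be.
summary = summarizer(transcript, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```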


Follow-Up: Sentiment Analysis

There are several methods for conducting sentiment analysis, each with its own strengths and weaknesses, ranging from lexicon-based approaches to classical machine learning classifiers and transformer-based models.
To conduct sentiment analysis, fine-tuning BERT has proven to be an effective and widely adopted approach. BERT (Bidirectional Encoder Representations from Transformers) is a state-of-the-art language model that excels at understanding the contextual meaning of words and sentences. By fine-tuning BERT on a sentiment analysis dataset, we can leverage its pre-trained knowledge and adapt it to the specific task of sentiment classification.

Fine-tuning BERT for sentiment analysis offers several advantages. Firstly, it allows the model to capture the nuances and context-dependent nature of sentiment, as BERT inherently considers the relationship between words and their surroundings. Additionally, fine-tuning enables the model to adapt to specific domains or languages, making it applicable to a wide range of sentiment analysis tasks.

Furthermore, fine-tuning BERT is advantageous when working with limited labeled data, as it can transfer its knowledge from pre-training to the sentiment analysis task, reducing the need for large annotated datasets. This makes it a practical choice for sentiment analysis applications in domains where labeled data may be scarce.
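To make the workflow concrete, below is a minimal fine-tuning sketch with the Hugging Face Trainer, assuming a multilingual BERT checkpoint and a three-class label scheme. The two-example dataset stands in for a real labeled corpus, and the hyperparameters are illustrative, not tuned choices.

```python
# Sketch of fine-tuning BERT for sentiment classification with the Trainer API.
# The checkpoint, label scheme, and two-example dataset are stand-ins.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"  # covers French as well as English
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Placeholder data: 0 = negative, 1 = neutral, 2 = positive.
data = Dataset.from_dict({
    "text": ["This lecture was very clear.", "The audio cut out halfway."],
    "label": [2, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-bert",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()  # on a real dataset, add an eval split and metrics
```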


Follow-Up: Question Answering

Within the realm of question answering, we encounter various approaches that can be classified into different types, namely extractive and generative question answering. Extractive question answering involves selecting the most relevant answer from a given passage, while generative question answering focuses on generating a coherent response based on the context and understanding of the question. As we proceed with our project, we have the flexibility to explore and evaluate multiple models that are well-suited for question answering. These models can be leveraged and fine-tuned to meet our specific requirements. One such noteworthy model, particularly suitable for our predominantly French-based dataset, is CamemBERT.



  • CamemBERT, built upon the powerful transformer architecture, has proven its capabilities in comprehending and responding to questions within the context of French text. By incorporating CamemBERT into our project, we can tap into its advanced language understanding capabilities and adapt it to our dataset through fine-tuning. This allows us to harness the full potential of CamemBERT and provide accurate and informative answers.

📁 Resources: CamemBERT
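As a hedged illustration, the snippet below runs extractive question answering with a publicly shared CamemBERT checkpoint fine-tuned on French QA datasets; the checkpoint name is our assumption here (our own fine-tuned model would take its place), and the context string is a placeholder transcript excerpt.

```python
# Illustrative extractive QA over a French transcript excerpt.
# The checkpoint is an assumed public CamemBERT QA model; the context string
# is placeholder text.
from transformers import pipeline

qa = pipeline("question-answering",
              model="etalab-ia/camembert-base-squadFR-fquad-piaf")

context = ("La descente de gradient est un algorithme d'optimisation itératif "
           "qui ajuste les paramètres du modèle afin de minimiser la fonction "
           "de coût.")

answer = qa(question="Que fait la descente de gradient ?", context=context)
print(answer["answer"], round(answer["score"], 2))
```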


Follow-Up: Named-Entity Recognition

We have a diverse range of NER models at our disposal to accurately identify entities in the transcribed text. You can find a list of state-of-the-art models and libraries at the link below:
📁 Resources: NER 
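For illustration, here is a minimal sketch with the transformers NER pipeline, assuming one public French checkpoint ("Jean-Baptiste/camembert-ner") and a placeholder sentence; any model from the resources above could be used instead.

```python
# Minimal NER sketch; the checkpoint and example sentence are assumptions.
from transformers import pipeline

ner = pipeline("ner",
               model="Jean-Baptiste/camembert-ner",
               aggregation_strategy="simple")  # merge word pieces into entities

text = "Le cours est donné par Marie Curie à l'université ESPRIT à Tunis."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```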


Follow-Up: Keywords Extraction

Our investigation into optimizing the process of keyword extraction has led us to several noteworthy approaches: KeyBERT, YAKE, RAKE, and fine-tuning CamemBERT on a custom dataset. These methods show strong potential for extracting keywords from various types of textual data.

  • KeyBERT stands out as a powerful technique that utilizes BERT-based embeddings to generate representative keywords. By leveraging the contextual information captured by BERT, KeyBERT offers a comprehensive and nuanced approach to keyword extraction.

  • YAKE (Yet Another Keyword Extractor) presents a noteworthy alternative, employing a lightweight unsupervised approach that relies on local statistical features of a single document, such as term frequency, position, and co-occurrence, to identify salient keywords.

  • RAKE (Rapid Automatic Keyword Extraction) is another notable method that relies on statistical heuristics, such as word frequency and co-occurrence, to extract keywords from text. This unsupervised technique is particularly adept at handling longer documents and can produce informative and concise keyword sets.

Additionally, fine-tuning CamemBERT, a variant of BERT designed specifically for French language processing, on a custom dataset shows promise for keyword extraction in French text. By adapting the pre-trained CamemBERT model to domain-specific or task-specific data, we can enhance its ability to accurately extract relevant keywords from French-language documents. A brief KeyBERT sketch follows as a concrete starting point.
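To ground the comparison, below is a minimal KeyBERT sketch; the transcript string is placeholder text, and KeyBERT's default sentence-embedding backbone is used. YAKE and RAKE expose similarly small APIs through their own packages.

```python
# Minimal KeyBERT sketch; the transcript is placeholder text.
from keybert import KeyBERT

kw_model = KeyBERT()  # uses KeyBERT's default sentence-embedding model

transcript = ("Gradient descent iteratively updates model parameters to "
              "minimize a loss function using its gradient.")

# keyphrase_ngram_range allows multi-word keyphrases; top_n bounds the output.
keywords = kw_model.extract_keywords(transcript,
                                     keyphrase_ngram_range=(1, 2),
                                     top_n=5)
print(keywords)  # list of (phrase, similarity score) pairs
```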



👉 Act