
Automated Video Segmentation for Lecture Videos: A Linguistics-Based Approach

Authors

  • Ming Lin (University of Arizona, USA)
  • Michael Chau (University of Hong Kong, Hong Kong)
  • Jinwei Cao (University of Arizona, USA)
  • Jay F. Nunamaker Jr. (University of Arizona, USA)

Abstract

Video, a rich information source, is commonly used for capturing and sharing knowledge in learning systems. However, the unstructured, linear nature of video makes it difficult for end users to access the knowledge it contains. To extract the knowledge structures hidden in a lengthy, multi-topic lecture video and make them easily accessible, the video must first be segmented into shorter clips by topic. Because manual segmentation is costly, automated segmentation is highly desirable. Current automated video segmentation methods, however, rely mainly on scene and shot change detection, which is ill-suited to lecture videos, where scene/shot changes are few and topic boundaries are unclear. In this article we investigate a new segmentation approach that performs well on this particular type of video. The approach applies natural language processing techniques such as noun phrase extraction, and draws on lexical knowledge sources such as WordNet. Multiple linguistics-based segmentation features are used, including content-based features such as noun phrases and discourse-based features such as cue phrases. Our evaluation results indicate that the noun phrase feature is salient.
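
To make the approach described in the abstract concrete, here is a minimal sketch (in Python, using NLTK) of linguistics-based topic segmentation over a lecture transcript: extract noun phrases from each sentence, reduce them to stemmed content terms, and place a topic boundary wherever lexical cohesion between adjacent sentences drops below a threshold. This is an illustrative reconstruction, not the authors' implementation; the chunking grammar, the Jaccard cohesion measure, the threshold value, and all function names are assumptions introduced for the example.

    import nltk

    # One-time NLTK data downloads (exact package names vary slightly by
    # NLTK version):
    #   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

    NP_GRAMMAR = "NP: {<DT>?<JJ>*<NN.*>+}"  # optional determiner, adjectives, nouns
    _chunker = nltk.RegexpParser(NP_GRAMMAR)
    _stemmer = nltk.PorterStemmer()

    def np_terms(sentence):
        """Stemmed content words drawn from the noun phrases of one sentence."""
        tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
        tree = _chunker.parse(tagged)
        terms = set()
        for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
            for word, tag in subtree.leaves():
                if tag.startswith(("NN", "JJ")):  # keep nouns and adjectives
                    terms.add(_stemmer.stem(word.lower()))
        return terms

    def cohesion(a, b):
        """Jaccard overlap of two term sets; low values hint at a topic shift."""
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def segment(sentences, threshold=0.1):
        """Return indices of sentences that open a new topic segment."""
        terms = [np_terms(s) for s in sentences]
        boundaries = [0]
        for i in range(1, len(terms)):
            if cohesion(terms[i - 1], terms[i]) < threshold:
                boundaries.append(i)
        return boundaries

In practice one would compare multi-sentence blocks rather than single adjacent sentences (as in TextTiling-style methods) to reduce noise. The abstract also mentions WordNet as a lexical knowledge source: a fuller sketch might expand each term set with synonyms via nltk.corpus.wordnet before computing overlap, though how the paper itself combines the features is not described on this page.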

Suggested Citation

  • Ming Lin & Michael Chau & Jinwei Cao & Jay F. Nunamaker Jr., 2005. "Automated Video Segmentation for Lecture Videos: A Linguistics-Based Approach," International Journal of Technology and Human Interaction (IJTHI), IGI Global, vol. 1(2), pages 27-45, April.
  • Handle: RePEc:igg:jthi00:v:1:y:2005:i:2:p:27-45

    Download full text from publisher

    File URL: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/jthi.2005040102
    Download Restriction: no

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:igg:jthi00:v:1:y:2005:i:2:p:27-45. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Journal Editor (email available below). General contact details of provider: https://www.igi-global.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.