Printed from https://ideas.repec.org/a/rdc/journl/v10y2019i4p17-29.html

Using the information and communications technology data deluge from a semantic perspective of a dynamic challenge: What to learn and what to ignore? -Part 2-

Author

Listed:
  • GREU, Victor

Abstract

The paper analyses the influence of the Data Deluge (DD), the huge flows of data leveraged or created in every field of activity by the complex proliferation and exponential development of Information and Communications Technology (ICT), the main driver of the Information Society's (IS) progress toward the Knowledge-Based Society (KBS). Consequently, the paper further examines the dynamic processes in which ICT generates data, through the complex symbiosis of ICT with humans. In a systemic approach, it briefly analyses the revolutionary impact of the DD on science, as well as the way most of the DD is generated (including the technological progress required to accomplish this task). The result is that the DD has leveraged a new reality, eScience, considered the fourth paradigm after experiment, theory and simulation, in which scientists no longer interact directly with the phenomena they study. This not only opens the exploration of previously inaccessible fields but also provides the basis of data-intensive science, one of the mechanisms by which the world's spiral of development is produced using the multiplying force of ICT and the DD. The concrete features of this new era, visible at CERN, in climate-change forecasting and in critical national infrastructures, represent fundamental trends in solving the most critical challenges of the DD; they include Big Data analytics, real-time on-site processing and storage, and the integration of ICT advances such as the Internet of Things (IoT), artificial intelligence (AI), and Cloud, Edge and Fog computing. Among these trends, the new phase of AI is currently the most prominent ICT hype, remarkable for machine learning (ML), deep learning (DL) and the emerging field of cognitive computing; yet their overall results remain highly dependent on human intelligence (HI) for delivering optimal knowledge refining.
The long road from data to knowledge or decisions has many steps, depending on the amount of data but also on the specific field, algorithm and the humans involved: high-performance applications usually need training sample data with specific content (not merely more simple or unstructured data), which often requires human supervision, even though AI/ML not only tends to get closer to HI but, for specific tasks, to replace humans. Another side of the DD challenges concerns the complex and complicated issues of extracting, and especially evaluating, the knowledge gained in the diverse processes involving the design and use of DD/AI/ML at Earth scale. This is about how to manage the DD's evolution in each main field of application, in order to keep applications efficacious and efficient. A preliminary conclusion is that such an approach could lead to partial, reasonable solutions, but it eludes what is, in our view, the most important and difficult issue: the global effects of the exponential ICT/DD evolution at the scale of the Earth's ecosystem. Even mathematically, each subsystem may be optimized by some criteria, yet at system level the result can be not only suboptimal but also exposed to considerable vulnerabilities and risks. Besides its huge benefits, exponential ICT development (together with other connected technologies) could generate undesired consequences at Earth scale, such as a growing carbon footprint, human dependences and waste pollution, in the general world context of climate change, fading resources, environmental degradation and social imbalances.
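The observation that optimizing each subsystem by its own criteria can still be suboptimal at system level can be illustrated with a minimal numeric sketch. The subsystems, benefit and energy figures, and the quadratic carbon penalty below are purely hypothetical assumptions for illustration, not values from the paper:

```python
from itertools import product

# Hypothetical example: two subsystems each choose a processing level.
# Each level yields a local benefit and an energy cost; the energy cost
# only matters in the shared, system-level objective.
levels = [1, 2, 3]
benefit = {1: 1.0, 2: 2.5, 3: 3.0}   # local benefit per subsystem
energy  = {1: 0.5, 2: 1.0, 3: 4.0}   # energy use per subsystem

def global_score(a, b):
    # System-level objective: total benefit minus a carbon penalty that
    # grows quadratically with the combined energy use (an assumption).
    total_energy = energy[a] + energy[b]
    return benefit[a] + benefit[b] - 0.3 * total_energy ** 2

# Each subsystem optimized in isolation picks the level with the
# highest local benefit, ignoring the shared penalty.
local_choice = max(levels, key=lambda l: benefit[l])

# The globally optimal pair, found by exhaustive search, differs.
best_pair = max(product(levels, levels), key=lambda p: global_score(*p))

print("local optimum per subsystem:", local_choice)
print("score if both use local optimum:", global_score(local_choice, local_choice))
print("globally optimal pair:", best_pair, "score:", global_score(*best_pair))
```

Here both locally optimal subsystems choose level 3, yet the shared penalty makes that pair far worse than the globally optimal, more moderate pair: the local criteria simply cannot see the superposed system-level cost.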
The most relevant case is that the DD leverages many applications for climate-change forecasting, which is a beneficial influence; yet a vicious circle can appear here, because such applications represent only a small part of the DD sphere, while many others are less beneficial or knowledge-providing (for example, streaming too much video content for entertainment or games), and all applications in the sphere contribute to the carbon footprint. The analysis should therefore be extended to all applications, i.e. each must be evaluated, case by case, at two levels: first, at the local level, its specific benefits and challenges; second, at the global level, the complex connections through which applications may combine with or influence others in less beneficial superposed processes or consequences. Consequently, the most complicated and difficult problem in managing the exponential DD/ICT evolutions is how to obtain, using both DD/ML/AI/ICT and HI resources, the refined knowledge that could provide optimal solutions at every relevant phase of those evolutions, i.e. how to save the Earth and humankind from irreversible consequences using the most powerful tools of science and technology (HI/AI). Moreover, the difficulty of this global problem increases with the level of complexity and with the speed of the changes that DD/ICT brings to the IS/KBS. In this global problem, the engineers and other specialists involved in developing ICT systems, products and services have a prominent (though unfortunately not always decisive) role. In fact, the fundamental problem of optimally refining knowledge often starts with the technical designers, although the opportunity offered by big projects should involve specialists from many levels and different areas, in order to achieve a multicriteria optimization at global scale.
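The two-level evaluation suggested above (local benefits and challenges first, then global superposition effects) can be sketched as follows. The application names, scores, weights and the linear penalty model are purely illustrative assumptions, not data from the paper:

```python
# Hypothetical two-level evaluation of DD applications: score each
# application locally, then adjust at the global level for its share
# of the superposed carbon footprint.
apps = {
    # name: (local_benefit, knowledge_gain, energy_use) -- illustrative
    "climate_forecasting": (0.9, 0.9, 0.6),
    "video_entertainment": (0.5, 0.1, 0.9),
    "iot_monitoring":      (0.7, 0.6, 0.4),
}

def local_score(benefit, knowledge):
    # Level 1: each application judged on its own merits.
    return 0.5 * benefit + 0.5 * knowledge

def global_adjustment(energy, total_energy):
    # Level 2: penalty proportional to the application's contribution
    # to the shared global footprint (a simple assumed model).
    return -0.5 * energy * total_energy

total_energy = sum(e for _, _, e in apps.values())
scores = {
    name: local_score(b, k) + global_adjustment(e, total_energy)
    for name, (b, k, e) in apps.items()
}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {s:+.3f}")
```

Under these assumed weights, a high-knowledge application such as climate forecasting keeps a positive overall score despite the global penalty, while a high-energy, low-knowledge application ranks last once the superposed footprint is counted.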
Since opportunity also means deciding what knowledge must be refined, this problem resembles the conscientious (university) professor's dilemma that we all (desirably) experience at the beginning of every school year, when we decide what to erase from and what to introduce into our courses, and, more generally, when a course, book or body of knowledge becomes obsolete. The final conclusion is that, in our ever-changing DD days, it is almost impossible to know precisely when and how knowledge must be refined (the decision is relative and approximate), but this does not mean that humankind should ignore these challenges; on the contrary, we must continuously strive to get as close as possible to the optimum, using the available up-to-date data, information and resources, and applying AI/HI with, desirably, wise solutions.

Suggested Citation

  • GREU, Victor, 2019. "Using the information and communications technology data deluge from a semantic perspective of a dynamic challenge: What to learn and what to ignore? -Part 2-," Romanian Distribution Committee Magazine, Romanian Distribution Committee, vol. 10(4), pages 17-29, December.
  • Handle: RePEc:rdc:journl:v:10:y:2019:i:4:p:17-29

    Download full text from publisher

    File URL: http://crd-aida.ro/RePEc/rdc/v10i4/2.pdf
    Download Restriction: no

    More about this item

    Keywords

    Data Deluge; Big Data; machine learning; artificial intelligence; learning algorithms; human intelligence; Internet of Things; eScience; data-intensive science; computer algorithms; computing infrastructure; climate changes;

    JEL classification:

    • L63 - Industrial Organization - - Industry Studies: Manufacturing - - - Microelectronics; Computers; Communications Equipment
    • L86 - Industrial Organization - - Industry Studies: Services - - - Information and Internet Services; Computer Software
    • M15 - Business Administration and Business Economics; Marketing; Accounting; Personnel Economics - - Business Administration - - - IT Management
    • O31 - Economic Development, Innovation, Technological Change, and Growth - - Innovation; Research and Development; Technological Change; Intellectual Property Rights - - - Innovation and Invention: Processes and Incentives
    • O33 - Economic Development, Innovation, Technological Change, and Growth - - Innovation; Research and Development; Technological Change; Intellectual Property Rights - - - Technological Change: Choices and Consequences; Diffusion Processes


