1970: The emergence of the concept of big data
The concept of "big data" dates back to the era of mainframes and the scientific computing associated with them. Compute-intensive work has always been demanding, and it has usually been tied to the need to process large volumes of information.
2008: The appearance of the term "Big Data"
It is one of the few terms with a reliably documented date of birth: September 3, 2008, when a special issue of the British scientific journal Nature was published, devoted to the question "How can technologies that open up the possibility of working with large amounts of data influence the future of science?" The special issue summed up the preceding discussions about the role of data in science in general, and in electronic science (e-science) in particular.
Along with the growth in processing power and the development of storage technologies, the ability to analyze big data is gradually becoming available to small and medium-sized businesses, ceasing to be the exclusive prerogative of large companies and research centers. This has contributed to the development of the cloud computing model.
In 2012, the Pew Research Center's "Internet and American Life" project and Elon University's "Imagining the Internet" center asked players in the digital field to evaluate two scenarios for how events might unfold by 2020.
53% of respondents agreed with the following
forecast:
Big Data 2020 - thanks to many changes, including the creation of the "Internet of Things", by 2020 human and machine analysis of large data sets will improve the quality of social, political, and economic research. The development of the phenomenon called "Big Data" will enable the rise of "nowcasting" (forecasting events in real time), the development of "deductive software" that analyzes data structures in order to predict possible outcomes, and the creation of algorithms for advanced correlation analysis that offer a new understanding of the world. On the whole, the presence of Big Data will be a benefit to society.
39% agreed with the second statement:
Big Data 2020 - thanks to many changes, including the creation of the "Internet of Things", by 2020 human and machine analysis of large data sets will create more problems than it solves. The abundance of data will give us false confidence in our ability to predict the future, which will lead many people into serious and painful mistakes. Moreover, powerful people and institutions will use Big Data analysis to pursue their own selfish goals and to manipulate research results to justify their actions. Big Data is also harmful because it serves the interests of the majority (sometimes not quite accurately) while belittling the importance of minorities and ignoring important outliers. On the whole, the arrival of Big Data will be a disadvantage for society.
Respondents could not choose both scenarios. A significant number of participants emphasized that, although they had chosen either the positive or the negative scenario, the real situation in 2020 would combine features of both.
2015: Gartner excluded "Big Data" from popular trends
On October 6, 2015, it became known that information about big data had been excluded from Gartner's "Hype Cycle for Emerging Technologies 2015" report. The analysts explained the decision by the blurring of the term: the technologies grouped under "big data" are actively used in enterprises, partly overlap with other popular areas and trends, and have become part of everyday business reality - a routine working tool rather than an emerging technology.
[Figure: Gartner "Hype Cycle for Emerging Technologies 2015"]
The topic of big data has not disappeared; it has been transformed into many different scenarios. Examples include precision farming, fraud-prevention systems, and medical systems that allow patients to be diagnosed and treated at a qualitatively new level.
One of the main trends now is the Internet of Things, which allows machines to be connected with each other (machine-to-machine).
In May 2015, Andrew White, vice president of research at Gartner, reflected in his blog: "The Internet of Things will overshadow big data as too narrowly focused a technology. Big data may generate a few more effective solutions and tools, but it is the Internet of Things that will become the platform of the future and increase our productivity in the long term."
Similar ideas had been voiced earlier. Summarizing Gartner's 2014 report, Forbes columnist Gil Press wrote: "We live in an era when it is important not only to be able to accumulate information, but also to extract business value from it. The first to do so were the industries that work directly with the consumer: telecommunications, banking, and retail."
The reality of big data, according to the Cisco Connected World Technology Report, is that most companies collect, record, and analyze data. Nevertheless, the report says, many companies face a number of complex business and information technology problems in connection with Big Data.
Companies collect and use data of a wide variety of types, both structured and unstructured. Here are the sources from which companies draw their data, according to the Cisco Connected World Technology Report:
• 74% collect current data.
• 55% collect historical data.
• 48% take data from monitors and sensors.
• 40% use data in real time and then erase it. Real-time data is used most often in India (62%), the USA (60%), and Argentina (58%).
EMC forecast: Big Data and real-time analytics will be combined
In the near future we will see a new chapter in the history of "big data" analytics as a two-level processing model develops. The first level will be "traditional" Big Data analysis, in which large data sets are processed in batch rather than in real time. The new, second level will provide the ability to analyze relatively large volumes of data in real time, mainly through in-memory analytics. In this new phase of Big Data development, technologies such as DSSD, Apache Spark, and GemFire will be as important as Hadoop. The second level will offer both new and familiar ways of using "data lakes" - "analytics on the fly" aimed at influencing events while they are still occurring. This opens up new business opportunities.
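To make the two-level model concrete, here is a minimal sketch using Apache Spark, one of the technologies named above. It is illustrative only: the paths, schema, and column names (such as /datalake/events, user_id, and amount) are hypothetical, and a real deployment would look different.

```python
# Minimal sketch of the two-level model described above, using Apache Spark.
# All paths, column names, and the schema are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("two-level-analytics").getOrCreate()

# Level 1: "traditional" batch analysis over a data lake.
# The full historical data set is processed offline, not in real time.
historical = spark.read.parquet("/datalake/events")
daily_totals = (historical
                .groupBy(F.to_date("event_time").alias("day"))
                .agg(F.sum("amount").alias("total")))
daily_totals.write.mode("overwrite").parquet("/datalake/daily_totals")

# Level 2: "analytics on the fly" - a streaming query that maintains
# running aggregates in memory and updates them as new events arrive.
incoming = (spark.readStream
            .schema(historical.schema)       # reuse the batch schema
            .parquet("/datalake/incoming"))  # new files land here continuously
running = incoming.groupBy("user_id").agg(F.sum("amount").alias("total"))
query = (running.writeStream
         .outputMode("complete")
         .format("memory")                   # in-memory sink, queryable live
         .queryName("running_totals")
         .start())

# The in-memory table can be queried while events are still occurring:
spark.sql("SELECT * FROM running_totals ORDER BY total DESC").show()
```

In this sketch the same data lake feeds both levels: the batch job periodically rereads everything, while the streaming job reacts to each new arrival - essentially the "new and familiar ways of using data lakes" the forecast describes.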
You can find more articles here: http://haneenalansari.blogspot.com/2017/04/what-is-future-of-digital-marketing.html