Collective Intelligence and Big Data: definition, examples in business

In the age of Big Data, the collective intelligence of human beings can harness vast amounts of data to solve some of humanity's major problems.

It also allows certain data to be analyzed more effectively than computer algorithms can. Discover the close relationship between collective intelligence and Big Data.

Collective Intelligence: Definition
The notion of collective intelligence refers to a group intelligence, or a shared intelligence, emerging from collaboration, collective effort, or competition between several individuals. It allows decisions to be made by consensus. Voting systems, social networks, and other methods of quantifying mass activity can be considered forms of collective intelligence.
This type of intelligence appears as an emergent property of the synergy between the knowledge offered by data, software, computer hardware, and experts in specific areas, enabling better decisions at the right time. More simply, collective intelligence results from the association between humans and new ways of processing information.
A widespread concept

The concept of collective intelligence has roots in sociology and computer science, but also in the field of business. For Pierre Lévy, it is a form of universally distributed intelligence that is constantly improving, coordinated in real time, and results in an effective mobilization of skills. The foundation and purpose of this form of intelligence is the mutual recognition and enrichment of individuals rather than the worship of hypostatized communities. In the eyes of Pierre Lévy and Derrick de Kerckhove, it refers to the capacity of computer networks to deepen the collective pool of social knowledge while simultaneously extending the scope of human interactions.
It contributes greatly to the transition of knowledge and power from the individual to the collective. According to Eric S. Raymond and J.C. Herz, open source intelligence will eventually yield superior results to the knowledge generated by proprietary software developed within companies. For Henry Jenkins, it is an alternative source of media power. Jenkins criticizes schools and educational systems for promoting independent problem solving and individual learning while remaining hostile to learning through collective means. Nevertheless, like Pierre Lévy, he considers collective intelligence essential for democratization, because it is tied to a knowledge-based culture fueled by the sharing of ideas and contributes to a better understanding of a diverse society.

Origin of the concept of collective intelligence

The concept of collective intelligence dates back to 1785. It was then that the Marquis de Condorcet observed that if each member of a group has a probability greater than one half of making the correct decision, the probability that the majority vote of the group is correct increases with the number of members. This is the jury theorem.
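The jury theorem can be illustrated with a short calculation (a minimal sketch, assuming independent voters and an odd group size to avoid ties):

```python
import math

def majority_correct_probability(p, n):
    """Probability that a majority of n independent voters, each
    correct with probability p, reaches the correct decision.
    Computed from the binomial distribution; n is assumed odd."""
    k_min = n // 2 + 1  # smallest winning majority
    return sum(
        math.comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(k_min, n + 1)
    )

# With individual accuracy p = 0.6, the majority's accuracy
# grows as the group gets larger:
for n in (1, 11, 101):
    print(n, round(majority_correct_probability(0.6, n), 4))
```

Conversely, if each voter is correct with probability below one half, the majority becomes *less* reliable as the group grows, which is the other half of Condorcet's result.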
Another precursor of this concept is the entomologist William Morton Wheeler. According to his observations from 1911, apparently independent individuals can cooperate to the point of becoming a single organism, a collective intelligence. The scientist perceived this collaborative process by observing ants, which act like the cells of a single entity.
In 1912, Émile Durkheim identified society as the only source of logical human thought. According to him, society constitutes a superior intelligence because it transcends the individual in space and time. In 1962, Douglas Engelbart established the link between collective intelligence and the efficiency of an organization. According to him, three people working together to solve a problem will be more than three times as effective as one person working alone.

Collective Intelligence in the Age of Big Data

In the age of Big Data, many businesses tend to look for answers to their questions where they are easy to search for, rather than where they are likely to be found. In reality, the likelihood that a Big Data research group will discover useful information depends on the type of data available. Structured, numerical, explicit, and clean data are more easily processed by computers, while unstructured, analog, and ambiguous data make more sense to human brains.
However, for a human as for a computer, the larger the datasets, the more computing power is required. In the case of structured data, more powerful computers will do the trick. For unstructured data, on the other hand, it is essential to rely on the collective intelligence of many human brains.
If the goal is to predict the future, a purely statistical Big Data approach is particularly flawed, because the available data are necessarily rooted in the past. Such data can certainly predict situations similar to those of the past, for example for a mature product line in a stable market, but they become useless for forecasting unprecedented situations.
