What is a bio-inspired neural graph?

Nature as inspiration for neural graphs

The design of bio-inspired neural graphs draws heavily from the structure and functionality of biological neurons and the central nervous system. The human brain, with its approximately 86 billion neurons and 100 trillion connections, operates with remarkable efficiency even though individual neurons are far slower than modern processors.

This efficiency is attributed to the brain's well-designed neural network structure, which avoids expensive algorithms due to energy constraints. Similarly, bio-inspired neural graphs utilize a structure that inherently stores relationships between data points, enabling faster data lookup and more efficient computation, akin to how biological neurons transmit information through synaptic junctions.

Addressing modern data challenges with bio-inspired neural graphs

In today's data-driven world, transforming raw data into actionable knowledge is a significant challenge. Traditional machine learning techniques, while effective, often fall short in addressing all the complexities and may introduce new issues, such as high energy consumption and computational inefficiency.

Bio-inspired neural graphs offer a solution by efficiently storing data in a compressed and interconnected form. This approach reduces the effort required for dataset adjustments and allows for a wide range of machine learning tasks, including classification, regression, clustering, pattern mining, and recommendations, to be performed more effectively using dedicated in-place algorithms.

What is a neuron in a neural graph?

Each node in the graph, which may represent an object, entity, feature value, pattern, element in a sequence, and more, can be considered a neuron. This is why the structure of a neural graph can be likened to a neural network.

Neurons can have direct values that provide them with semantic meaning, such as 'red color' or '$150' (sensory neurons). However, semantic meaning can also be indirect, defined by the structure of associations. For example, a neighboring neuron might represent 'John' through its associations (data neurons), or a neuron might represent a '4-3-3 formation pattern' in a football match by being associated with 'team A' but not 'team B' (pattern neurons).

In addition, every neuron has its own internal state and a list of specialized connections. While the detailed structure of a neuron can be complex, the following elements are sufficient (a short sketch follows the list):

  1. activation,
  2. counter,
  3. priority.
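
A minimal Python sketch of this neuron state (the names are illustrative, not GiQ's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Neuron:
    """Illustrative neuron holding the three state elements described above."""
    label: str                 # semantic meaning, e.g. 'red color' or '$150'
    activation: float = 0.0    # current activation level, set during propagation
    counter: int = 0           # how many times this value/entity was observed
    priority: float = 1.0      # relative importance, used by associative algorithms
    connections: list = field(default_factory=list)  # weighted links to other neurons

# A sensory neuron whose value was observed three times in the data:
price = Neuron(label="$150", counter=3)
```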

Activation

Activation is crucial for many neural network-based algorithms. It can be used to propagate user-specific data as neural activations through the graph, enabling the computation of recommendations.
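
For illustration, here is a minimal spreading-activation sketch in Python (the graph layout, decay factor, and naming are assumptions for this example, not GiQ internals):

```python
# Hypothetical spreading-activation pass: activation flows from a user's neuron
# into neighboring items, weighted by connection strength and damped per hop.
graph = {
    "user:anna": [("item:book", 0.9), ("item:lamp", 0.4)],
    "item:book": [("item:ebook", 0.8), ("item:lamp", 0.2)],
    "item:lamp": [("item:bulb", 0.7)],
    "item:ebook": [],
    "item:bulb": [],
}

def spread(start, decay=0.5, steps=2):
    activation = {start: 1.0}
    frontier = {start}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbor, weight in graph[node]:
                gained = activation[node] * weight * decay
                if gained > activation.get(neighbor, 0.0):
                    activation[neighbor] = gained
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return activation

scores = spread("user:anna")
seen = {item for item, _ in graph["user:anna"]}          # already interacted with
recs = sorted((n for n in scores if n.startswith("item:") and n not in seen),
              key=scores.get, reverse=True)
print(recs)   # ['item:ebook', 'item:bulb'] - novel items, ranked by activation
```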

Counters

Counters play a significant role in various aspects, such as information compression, identifying discriminative features, and applying statistical and information theory methods directly within the graph.
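
For example, co-occurrence counters can be turned directly into connection weights or information-theoretic scores; the formulas below are generic illustrations rather than GiQ's exact method:

```python
import math

# Illustrative counters: how often a feature value and a class co-occur.
count_feature = 120    # observations of 'color = red'
count_class = 300      # observations of 'category = toy'
count_both = 90        # observations where both hold
n_total = 1000         # all observations in the graph

# A connection weight as a conditional probability P(class | feature):
weight = count_both / count_feature                     # 0.75

# Pointwise mutual information, a simple measure of how discriminative
# the feature is for the class:
pmi = math.log2((count_both / n_total) /
                ((count_feature / n_total) * (count_class / n_total)))

print(f"weight={weight:.2f}, pmi={pmi:.2f}")
```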

Priority

Priority, inspired by biological neurons, reflects the varying importance of neurons (in biology, often indicated by a neuron's size). This concept enhances the efficiency of associative algorithms.

Biological neurons are far more intricate than the typical neuron models used in deep learning. A nerve cell is a physical object that operates in both time and space. In contrast, computer models of neurons usually exist as n-dimensional arrays designed to occupy minimal space for efficient computation on modern GPGPU (general-purpose computing on graphics processing units) systems, often ignoring space and time contexts. Despite being significantly simplified, these neuron models aim to draw as much inspiration from nature as possible.

Neuron associations: types and roles

Associations form relationships between neurons. Only specific types of associations are allowed, including the following:

  • defining,
  • explanatory,
  • sequential,
  • inhibitory,
  • similarity.

The limited subset of possible associations simplifies the construction of efficient and dedicated graph-based data mining and machine learning algorithms. Semantic meaning is assigned to the nodes themselves, making associations simpler and more predictable.

The only parameter these associations can take is weight. Weighted connections are powerful and integral to most algorithms. This concept is also biologically inspired, as neuron cells have complex regulatory structures at synaptic junctions—where an axon from one neuron connects with a dendrite from another neuron. These junctions enable the transmission of information between neurons.
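
A compact way to model this restricted association set, with weight as the single parameter (an illustrative encoding, not GiQ's internal format):

```python
from dataclasses import dataclass
from enum import Enum, auto

class AssociationType(Enum):
    DEFINING = auto()
    EXPLANATORY = auto()
    SEQUENTIAL = auto()
    INHIBITORY = auto()
    SIMILARITY = auto()

@dataclass(frozen=True)
class Association:
    source: str
    target: str
    kind: AssociationType
    weight: float = 1.0   # the only tunable parameter, as described above

# 'red color' helps define the entity 'apple #7':
edge = Association("sensor:red", "entity:apple_7", AssociationType.DEFINING, 0.8)
```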

Sensory fields: how to work with raw data

Sensory fields are data structures designed to work with raw data. They fetch raw signals from the environment and store them in the appropriate form within sensory neurons. These sensory neurons connect to other neurons representing entities (data neurons) and frequent patterns (pattern neurons), forming layers of abstraction in the neural graph, as shown in the figure below.

Sensory fields are represented by a dedicated data structure called ASA-graphs. They handle numbers and their derivatives (e.g., dates, texts) in an associative manner. ASA-graphs store and query data so efficiently that they can replace conventional data structures like B-trees, RB-trees, AVL-trees, and WAVL-trees in practical applications such as database indexing. In this sense, every dataset loaded into GiQ is transformed into a dedicated in-memory graph database, specialized in data analysis through dedicated associative algorithms.

ASA-graphs are complex structures, but their key features are quite straightforward:

  • Values aggregation - provides data compression and emphasizes valuable relationships between data.
  • Values counting - useful for calculating connection weights and supporting various algorithms.
  • Awareness of neighbors - dedicated, weighted connections to adjacent sensory neurons within the sensory field represent vertical relationships within the sensor and enable fuzzy search, fuzzy activation, and very fast queries.
  • Search tree construction - built analogously to a B-tree, allowing fast data lookup. Feature values act as keys, and sensory neurons act as values (similar to how a sensory cell is often the outermost part of the nervous system in biology). The sensory neurons are to some extent independent of the search tree and are part of the neural graph.
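
The toy sketch below mimics these four features with ordinary Python structures: values are aggregated with counters, kept in sorted order for B-tree-like lookup (here via `bisect`), and neighboring values remain reachable for fuzzy matching. Real ASA-graphs are considerably more sophisticated; this only illustrates the idea:

```python
import bisect

class SensoryField:
    """Toy stand-in for an ASA-graph: aggregated, counted, ordered values."""

    def __init__(self):
        self.keys = []     # sorted unique feature values (the 'search tree')
        self.counts = {}   # value -> number of occurrences (values counting)

    def insert(self, value):
        if value in self.counts:
            self.counts[value] += 1          # aggregation: stored once, counted
        else:
            bisect.insort(self.keys, value)  # keeps lookup logarithmic
            self.counts[value] = 1

    def fuzzy_lookup(self, value):
        """Exact hit, or the nearest neighbors in the ordered field."""
        i = bisect.bisect_left(self.keys, value)
        if i < len(self.keys) and self.keys[i] == value:
            return [value]
        return self.keys[max(i - 1, 0):i + 1]  # adjacent sensory neurons

field = SensoryField()
for price in [150, 150, 160, 175, 150, 199]:
    field.insert(price)

print(field.counts[150])        # 3 occurrences aggregated into one neuron
print(field.fuzzy_lookup(170))  # -> [160, 175], the nearest stored values
```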

Graph structure and knowledge forming

Depending on the use case, the information level can be managed by the graph structure, which holds the context necessary to interpret incoming data.

Knowledge formation is an emergent process that occurs as the graph grows, with more neurons becoming connected through various relationships. The knowledge level can be modeled by the graph structure, which may be static (unchanging in its internal structure) or dynamic (capable of changing its internal structure), depending on the problem being addressed.

Applicable knowledge is the next stage, emerging as even more neurons become interconnected. It represents evaluated understanding: an intelligent feedback loop in which understanding is continuously refined through real-world application and evaluation of results. This stage may also involve external systems that cooperate to solve complex problems, further enhancing the graph's ability to manage and interpret information.

Bio-inspired neural graphs in GiQ

Bio-inspired neural graphs, as leveraged by GiQ – a comprehensive data analytics platform – significantly enhance data processing and decision-making. These graphs streamline machine learning, data mining, and big data analysis by representing raw data efficiently and eliminating the need for extensive pre-processing.

Machine learning

Efficient raw-data representation in a neural graph is one of the most important requirements. Once data is loaded into sensory fields, no further processing steps are needed. Sensory fields automatically handle missing or unnormalized data (e.g., vectors within vectors). Symbolic or categorical data types, such as strings, are as easy to handle as any numerical format, which eliminates the need for one-hot encoding or similar techniques. Symbolic data can be manipulated directly, allowing associative pattern mining to be performed in place without any pre-processing.
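
As a hypothetical loading sketch (plain Python, not GiQ's actual API), each feature gets its own sensory field, so mixed types, nested values, and gaps need no special treatment:

```python
# Each feature feeds its own sensory field, so strings, numbers, and
# missing values need no one-hot encoding or imputation.
records = [
    {"color": "red",  "price": 150,  "tags": ["sale", "new"]},
    {"color": "blue", "price": None, "tags": ["sale"]},        # missing value
    {"color": "red",  "price": 199,  "tags": []},
]

sensory_fields = {}
for record in records:
    for feature, value in record.items():
        field = sensory_fields.setdefault(feature, {})
        values = value if isinstance(value, list) else [value]  # nested data
        for v in values:
            if v is not None:              # missing data simply adds no signal
                field[v] = field.get(v, 0) + 1

print(sensory_fields["color"])   # {'red': 2, 'blue': 1} - symbolic, no encoding
print(sensory_fields["tags"])    # {'sale': 2, 'new': 1} - vector within a vector
```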

This approach can significantly reduce the effort required to adjust a dataset to a model, as is often necessary with many modern methods. Additionally, all algorithms can run in place without any additional effort. Nearly every typical machine learning task, including classification, regression, pattern mining, sequence analysis, and clustering, is feasible.

Data mining

The associative nature of GiQ's neural graphs allows you to explore non-obvious relationships and uncover hidden insights. With powerful frequent-pattern and association-rule mining algorithms, GiQ enables ranked lists, entity mapping, grouping, clustering, and much more. This comprehensive approach ensures that even the most subtle connections within your data are revealed, facilitating better decision-making and data-driven strategies.

Graph database

Bio-inspired neural graphs act as an efficient database engine. Several experiments have demonstrated that for queries involving complex join operations or those heavily reliant on indexes, the performance of the graph can be orders of magnitude faster than traditional RDBMS like PostgreSQL or MariaDB. This remarkable efficiency is made possible by the structure of sensory fields. Data lookup operations are as fast as those for indexed columns in RDBMS.

The impressive acceleration of various join operations can be easily explained: there is no need to compute relationships, as they are inherently stored within the graph's structure. This exemplifies the power of the algorithm-as-a-structure approach.
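
A toy comparison of the two approaches in plain Python (not GiQ code): a relational join must compute matches at query time, while a graph that already stores the relationship only follows edges:

```python
# Relational style: the join between orders and customers is computed per query.
customers = {1: "Anna", 2: "Boris"}
orders = [(101, 1), (102, 2), (103, 1)]          # (order_id, customer_id)

joined = [(oid, customers[cid]) for oid, cid in orders]   # work at query time

# Algorithm-as-a-structure style: the relationship is an edge stored at load
# time, so 'joining' is just following pointers already present in the graph.
graph = {
    "customer:Anna":  ["order:101", "order:103"],
    "customer:Boris": ["order:102"],
}
anna_orders = graph["customer:Anna"]             # no matching work at query time
print(joined, anna_orders)
```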

Big data

Bio-inspired neural graphs treat data as a first-class citizen, regardless of size. As more data is loaded, the graph structure becomes increasingly saturated, leading to a logarithmic memory-consumption curve. This efficient memory usage ensures that even large datasets can be handled effectively. Moreover, the ubiquitous algorithm-as-structure approach becomes even more relevant for large datasets, where even simple operations can be resource-intensive.
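
The saturation effect can be imitated with a quick experiment: as rows accumulate, the number of distinct values a sensory field must store grows far more slowly than the row count (how closely this tracks a logarithmic curve depends on the data distribution; this is an illustration, not a benchmark):

```python
import random

random.seed(0)

seen = set()   # one sensory neuron per distinct value (aggregation)
checkpoints = {1_000, 10_000, 100_000}
for row in range(1, 100_001):
    value = int(random.gauss(500, 150))   # e.g. prices clustered around $500
    seen.add(value)
    if row in checkpoints:
        print(f"rows={row:>7}  distinct values stored={len(seen)}")

# Memory tracks distinct values, not rows, so growth flattens as the
# field saturates - large datasets stay manageable.
```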

By embedding algorithms within the graph structure, neural graphs optimize performance and reduce the computational cost of data processing, making them ideal for handling vast amounts of information efficiently.
