Deep Neural Network: Freeda

Freeda is the proprietary deep neural network that processes the audio feedback and determines the quantity of FD3 tokens to disburse to players by evaluating various factors, including but not limited to the following:

☑️ Number of words used in the recording (Length)

☑️ Emotions expressed in the recording

☑️ Accuracy and precision in the recording

☑️ Authenticity and sincerity in the recording

Let’s take an example where Player A records monotonous audio feedback (“The game was good”) and Player B records detailed, excited audio feedback (“The game should have an M203 grenade launcher to make it more competitive and fun to play”). In this case, Freeda will disburse more FD3 tokens to Player B because that feedback is longer, more accurate and precise, and more emotionally expressive.

Freeda adopts technology similar to that employed by insurance and banking institutions to investigate the authenticity and sincerity of an audio recording. While processing the audio, Freeda also takes into account special parameters such as vocal intonation, voice emphasis, and speaking pace.
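
To make the idea concrete, the following is a minimal sketch of how such factor-based scoring could be combined into a single reward score. The factor names, weights, and `score_feedback` function are illustrative assumptions, not Freeda’s actual model.

```python
# Illustrative sketch only: the factor names, weights, and scoring formula
# below are assumptions for demonstration, not Freeda's proprietary model.

FACTOR_WEIGHTS = {
    "length": 0.25,        # number of words used in the recording
    "emotion": 0.25,       # emotions expressed (intonation, emphasis, pace)
    "accuracy": 0.25,      # accuracy and precision of the feedback
    "authenticity": 0.25,  # authenticity and sincerity of the feedback
}

def score_feedback(factors: dict) -> float:
    """Combine normalized factor scores (0.0-1.0) into a single reward score."""
    return sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0) for name in FACTOR_WEIGHTS)

# Player A: short, monotone feedback ("The game was good").
player_a = {"length": 0.1, "emotion": 0.2, "accuracy": 0.3, "authenticity": 0.8}
# Player B: detailed, excited feedback about adding an M203 grenade launcher.
player_b = {"length": 0.7, "emotion": 0.9, "accuracy": 0.8, "authenticity": 0.8}

print(score_feedback(player_a))  # lower score -> fewer FD3 tokens
print(score_feedback(player_b))  # higher score -> more FD3 tokens
```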

“Users Don’t Have to Plug in Any Special or Exclusive Equipment or Hardware to Record the Audio Feedback”

Feed3’s Natural Language Processing capability allows users to record and submit audio feedback in plain English, which is interpreted, processed, classified, and supplied to the product development team at lightning-fast speed. In this way, the NLP capability allows Freeda to transform the feedback into actionable insights that help the blockchain game or metaverse project move forward.

Freeda is currently powered by 1,208 hours of NLP customer conversation data and integrates computational linguistics with multiple models, including Machine Learning and Deep Learning. These models and several advanced functionalities (as illustrated below) have been embedded to make the audio recording experience smooth and hassle-free for the users and the development team involved in the feedback loop.
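
As a rough illustration of such a pipeline, the sketch below transcribes an audio clip and runs sentiment analysis on the resulting text using off-the-shelf Hugging Face models; Freeda’s own models and training data are proprietary, so the library choice, default models, and the `player_feedback.wav` file here are assumptions.

```python
# Rough NLP pipeline sketch using off-the-shelf models; Freeda's proprietary
# models are not public, so this is an assumption-based stand-in.
from transformers import pipeline

# Speech-to-text: turn the recorded audio feedback into plain English text.
asr = pipeline("automatic-speech-recognition")
transcript = asr("player_feedback.wav")["text"]  # hypothetical audio file

# Sentiment analysis: one of several signals that can be turned into
# actionable insights for the product development team.
sentiment = pipeline("sentiment-analysis")

print(transcript)
print(sentiment(transcript))
```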

Speech Processing

Speech processing helps Freeda analyze, track, and extract multiple elements (both apparent and hidden) from the sound frequency and waveform content of the audio recording, including Emotions, Keywords, Length, Sentiment, Knowledge, and Confidence.

Extracting these elements from the speech (audio recording) is crucial for Freeda to determine the authenticity of the feedback and decide the quantity of FD3 tokens to release to the user or gamer submitting it.
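
As a hedged sketch, the snippet below uses librosa to pull common waveform-level signals (duration, energy, pitch contour, MFCCs) that downstream models could map to elements such as Length, Emotions, or Confidence; the exact features Freeda extracts are not public.

```python
# Sketch of waveform-level feature extraction; the exact features Freeda
# analyzes are proprietary, so these are common stand-ins.
import librosa
import numpy as np

y, sr = librosa.load("player_feedback.wav", sr=None)  # hypothetical recording

# Length proxy: duration of the recording in seconds.
duration = librosa.get_duration(y=y, sr=sr)

# Emphasis/energy proxy: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Intonation proxy: fundamental frequency (pitch) contour.
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)

# Timbre/content proxy: MFCCs, commonly fed to downstream classifiers.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(f"duration={duration:.1f}s, mean RMS={np.mean(rms):.4f}, "
      f"median pitch={np.median(f0):.1f} Hz, MFCC shape={mfcc.shape}")
```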

Audio Classification

Integration of Feed3 technology will motivate users to submit feedback in expectation of rewards in the form of FD3 tokens. This will eventually leave the development team with a large amount of data to analyze, making it difficult for them to sort through the information and implement suitable actions.

For this purpose, Freeda incorporates an audio classification algorithm that uses specific keywords from the audio recording (as extracted by Speech Processing) to identify the class/category to which the feedback belongs.

There will be multiple classes, including Menus and UI, Rank Progression, Weapons, NFTs, Vehicles, Gameplay, In-Game Events, In-Game Features, Tournaments, Misc., and many more. This allows the feedback to be channeled to the right members of the development team for quick issue resolution.
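
One simple way such keyword-to-class routing could work is sketched below; the keyword lists mirror the class names above, but the matching rule is an assumption rather than Freeda’s actual classifier.

```python
# Illustrative keyword-to-class routing; the real classifier is proprietary,
# so the keyword lists and matching rule here are assumptions.
CLASS_KEYWORDS = {
    "Menus and UI": {"menu", "button", "interface", "hud"},
    "Weapons": {"rifle", "grenade", "launcher", "ammo"},
    "Vehicles": {"car", "drive", "garage"},
    "Tournaments": {"tournament", "ranked", "bracket"},
    "In-Game Features": {"customization", "skin", "feature"},
}

def classify_feedback(keywords: list) -> str:
    """Return the class whose keyword set overlaps most with the extracted keywords."""
    extracted = {kw.lower() for kw in keywords}
    scores = {cls: len(extracted & kws) for cls, kws in CLASS_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Misc."

# Keywords extracted by Speech Processing from an example recording.
print(classify_feedback(["grenade", "launcher", "fun"]))  # -> "Weapons"
```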

Onset Detection

The Onset Detection algorithm enables Freeda to identify the beginning of a transient in the audio recording. This transition is usually identified in the form of a change in voice frequency, a sudden burst of energy, a change in the short-time spectrum, a change in the statistical properties of the audio, and many other factors.

The Onset Detection algorithm handles the situation in which a user or player records multiple opinions in a single recording. Let’s understand this with the help of an example where the player records the opinion: “The Angel GT is not available to drive in the in-game tournaments and the garage has very limited customization options for this car”.

In this example, the Onset Detection algorithm will be triggered and Freeda will identify two opinions in this recording:

Opinion 1: “The Angel GT is not available to drive in the in-game tournaments”

Opinion 2: “The garage has very limited customization options for this car”

By identifying the onset of the second opinion in the above recording, Freeda effectively segments the recording into two smaller semantic units. This segmentation helps to properly categorize a recording with multiple opinions into multiple relevant classes.

In this case, Opinion 1 can be categorized into the class In-Game Tournaments, and Opinion 2 into the class In-Game Features.

This helps forward the issue directly to the team members responsible for the development and maintenance of that particular class, resulting in faster issue resolution.
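
For illustration, the sketch below uses librosa’s generic onset detector to locate candidate split points and cut the recording into segments that can then be transcribed and classified separately; Freeda’s actual onset criteria are not public, and a production system would filter these candidates further.

```python
# Sketch of onset-based segmentation using a generic onset detector;
# Freeda's proprietary onset criteria are not public.
import librosa

y, sr = librosa.load("multi_opinion_feedback.wav", sr=None)  # hypothetical recording

# Detect onset times (in seconds), e.g. energy bursts or spectral changes.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time", backtrack=True)

# Split the recording at the detected onsets into smaller semantic segments,
# each of which can then be transcribed and classified on its own.
boundaries = [0.0, *onsets, librosa.get_duration(y=y, sr=sr)]
segments = [y[int(start * sr):int(end * sr)]
            for start, end in zip(boundaries[:-1], boundaries[1:])]

print(f"{len(segments)} candidate segments detected")
```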

Noise Cancellation

To make the product development team’s life easier, Freeda is embedded with noise cancellation technology and a robust Automatic Gain Control (AGC) algorithm that eliminates unwanted background noise and disturbance when users or players record audio feedback.

This allows the development team to have access to crisp and clear audio feedback for effective understanding and solution implementation.
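
A minimal sketch of that kind of clean-up stage is shown below: a simple AGC step followed by crude spectral gating. The threshold and target values are placeholders, and Freeda’s actual noise cancellation pipeline is not public.

```python
# Minimal clean-up sketch: simple AGC followed by crude spectral gating.
# Parameter values are placeholders; Freeda's real pipeline is proprietary.
import numpy as np
import librosa

def simple_agc(y: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Scale the signal so its RMS level matches a target (basic AGC)."""
    rms = np.sqrt(np.mean(y ** 2)) + 1e-8
    return y * (target_rms / rms)

def spectral_gate(y: np.ndarray, threshold_db: float = -40.0) -> np.ndarray:
    """Zero out STFT bins below a fixed dB threshold (crude noise gate)."""
    stft = librosa.stft(y)
    magnitude_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)
    stft[magnitude_db < threshold_db] = 0.0
    return librosa.istft(stft)

y, sr = librosa.load("noisy_feedback.wav", sr=None)  # hypothetical recording
clean = spectral_gate(simple_agc(y))
```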

Lightning Speed Data Transfer

Users and players interacting with Freeda will not have to wait long for their recorded audio feedback to reach the development team. On a high-speed internet connection, it takes only a few microseconds for the data to be processed and delivered to the team.

Anti-Scripting Mechanism

The Anti-Scripting mechanism will be integrated to prevent the misuse or exploitation of Feed3 technology, especially by bots, which are becoming more common by the day. The mechanism requires users to speak aloud a randomly generated text displayed on the screen to verify that they are human. FD3 tokens are disbursed to a player only after this challenge is successfully completed.

If a player fails to complete the task after 3 attempts, the account will be flagged and put into HOLD status for 24 hours. During this period, the team will investigate the activity and review the account; if it is found suspicious, it will be permanently banned.
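
The flow described above can be sketched roughly as follows; the `transcribe_attempt` hook and the return values are hypothetical, and the real mechanism would run server-side alongside Freeda.

```python
# Sketch of the anti-scripting challenge flow described above; the
# `transcribe_attempt` hook and return values are hypothetical.
import random
import string

MAX_ATTEMPTS = 3
HOLD_HOURS = 24

def generate_challenge(length: int = 6) -> str:
    """Randomly generated text the user must read aloud."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def run_anti_scripting_check(player_id: str, transcribe_attempt) -> str:
    """Return 'PASS' if the spoken text matches within MAX_ATTEMPTS, else 'HOLD'."""
    challenge = generate_challenge()
    for _ in range(MAX_ATTEMPTS):
        spoken = transcribe_attempt(player_id, challenge)  # speech-to-text of the reply
        if spoken.strip().upper() == challenge:
            return "PASS"  # FD3 tokens are disbursed only on success
    # Three failed attempts: flag the account and hold it for HOLD_HOURS for review.
    return "HOLD"

# Usage: a scripted bot that cannot speak the prompt fails every attempt.
print(run_anti_scripting_check("player_123", lambda pid, challenge: ""))  # -> "HOLD"
```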
