Google has unveiled an improved technology that makes it easier and faster to research and develop powerful new ranking algorithms.

This will allow Google to quickly create new anti-spam algorithms, improve natural language processing, and get related ranking algorithms into production faster than ever.

The improved TF-Ranking coincides with recent Google updates

This is of interest because Google released several spam-fighting algorithm updates and two core algorithm updates in June and July 2021, directly following the release of this new technology in May 2021.

The timing may be a coincidence, but given everything the new Keras-based version of TF-Ranking can do, it may be worth familiarizing yourself with it in order to understand why Google has picked up the pace of releasing ranking-related algorithm updates.

A new version of Keras-based TF-Ranking

Google released a new version of TF-Ranking that can be used to improve neural ranking algorithms as well as natural language processing algorithms such as BERT.

It’s a powerful way to create new ranking algorithms and to test variations of existing ones, and to do it incredibly fast.

TensorFlow Ranking

According to Google, TensorFlow is a machine learning platform.

In a YouTube video released in 2019, the first version of TensorFlow Ranking was described as follows:

“The first open source deep learning library for learning to rank (LTR) at scale.”

An innovation of the original TF-Ranking platform was that it changed how relevant documents are ranked.

In the past, relevant documents were compared in pairs: the probability that one document was relevant to the query was compared against the probability for another document.

This was a comparison between pairs of documents, not a comparison of the entire list.

The innovation of TF-Ranking is that it made it possible to score the entire list of documents at once, which is called multi-item (listwise) scoring. This approach allows for better ranking decisions.
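
To make the difference concrete, the following is a minimal sketch in plain TensorFlow (not the TF-Ranking API) that computes a pairwise loss and one common listwise loss over the same hypothetical list of four documents; the scores and relevance labels are invented for illustration.

```python
# Minimal sketch: pairwise vs. listwise (multi-item) scoring for one query.
# Plain TensorFlow, not the TF-Ranking API; all numbers are made up.
import tensorflow as tf

scores = tf.constant([2.1, 0.3, 1.7, -0.5])  # model scores, one per document
labels = tf.constant([3.0, 0.0, 2.0, 0.0])   # graded relevance judgments

# Pairwise view: each loss term looks at only two documents at a time,
# penalizing pairs where the less relevant document outscores the more relevant one.
score_diff = scores[:, None] - scores[None, :]   # s_i - s_j for every pair
label_diff = labels[:, None] - labels[None, :]   # l_i - l_j for every pair
pair_mask = tf.cast(label_diff > 0, tf.float32)  # pairs where doc i should beat doc j
pairwise_loss = tf.reduce_sum(pair_mask * tf.math.log1p(tf.exp(-score_diff)))

# Listwise view: a softmax cross-entropy computed over the whole list at once,
# pushing the score distribution toward the (normalized) relevance distribution.
targets = labels / tf.reduce_sum(labels)
listwise_loss = -tf.reduce_sum(targets * tf.nn.log_softmax(scores))

print(float(pairwise_loss), float(listwise_loss))
```

The pairwise term never sees more than two documents at once, while the listwise term is computed over the entire list in a single pass, which is what multi-item scoring refers to.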

Improved TF-Ranking enables rapid development of powerful new algorithms

An article published on Google’s AI blog says the new TF-Ranking is a major release that makes it easier than ever to set up learning-to-rank (LTR) models and get them into live production faster.

This means Google can create new algorithms and add them to search faster than ever.

The article states:

“Our native Keras ranking model has a brand-new workflow design, including a flexible ModelBuilder, a DatasetBuilder to set up training data, and a Pipeline to train the model with the provided dataset.

These components make building a customized LTR model easier than ever, and facilitate rapid exploration of new model structures for production and research.”
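
For a sense of what that workflow looks like in code, here is a condensed sketch loosely modeled on the public keras_dnn_tfrecord.py example that ships with TF-Ranking v0.4.0. The feature names, network sizes, file patterns, and hyperparameter values are placeholders, and the exact constructor arguments may differ between TF-Ranking releases.

```python
# A condensed sketch of the v0.4.0 Keras workflow: ModelBuilder (how documents
# are scored), DatasetBuilder (how training data is read) and Pipeline (training).
# Loosely adapted from keras_dnn_tfrecord.py; all names and values are placeholders.
import tensorflow as tf
import tensorflow_ranking as tfr

# Per-document features and the relevance label, stored in TFRecord files.
example_feature_spec = {
    "document_feature_%d" % i: tf.io.FixedLenFeature(
        shape=(1,), dtype=tf.float32, default_value=0.0)
    for i in range(1, 4)
}
label_spec = ("relevance",
              tf.io.FixedLenFeature(shape=(1,), dtype=tf.float32, default_value=-1.0))

# ModelBuilder: defines inputs, preprocessing and the scoring network.
model_builder = tfr.keras.model.ModelBuilder(
    input_creator=tfr.keras.model.FeatureSpecInputCreator({}, example_feature_spec),
    preprocessor=tfr.keras.model.PreprocessorWithSpec(),
    scorer=tfr.keras.model.DNNScorer(hidden_layer_dims=[64, 32, 16], output_units=1),
    mask_feature_name="example_list_mask",
    name="demo_dnn_ranker",
)

# DatasetBuilder: defines how the training and validation data are read.
dataset_builder = tfr.keras.pipeline.SimpleDatasetBuilder(
    {},  # no context (query-level) features in this sketch
    example_feature_spec,
    mask_feature_name="example_list_mask",
    label_spec=label_spec,
    hparams=tfr.keras.pipeline.DatasetHparams(
        train_input_pattern="/tmp/train*.tfrecord",
        valid_input_pattern="/tmp/valid*.tfrecord",
        train_batch_size=32,
        valid_batch_size=32,
    ),
)

# Pipeline: ties model and data together and runs training with a ranking loss.
pipeline = tfr.keras.pipeline.SimplePipeline(
    model_builder,
    dataset_builder=dataset_builder,
    hparams=tfr.keras.pipeline.PipelineHparams(
        model_dir="/tmp/ranking_model",
        num_epochs=5,
        steps_per_epoch=100,
        validation_steps=10,
        learning_rate=0.05,
        loss="softmax_loss",
    ),
)
pipeline.train_and_validate(verbose=1)
```

The point of the design is that swapping in a different scorer, loss, or dataset is a small, localized change, which is what makes it fast to try out new model structures.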

TF-Ranking and BERT

When an article or research paper states that its results were only marginally better, includes caveats, and says further research is needed, that indicates the algorithm under discussion may not be in use because it isn’t ready or it turned out to be a dead end.

That is not the case with TFR-BERT, a combination of TF-Ranking and BERT.

BERT is a machine learning method for handling natural language. It’s a way to understand search queries and website content.

BERT is one of the most important updates from Google and Bing in recent years.

According to the article, combining TF-Ranking and BERT to optimize the ordering of list inputs generated “significant improvements.”

This statement that the results were significant is important because it increases the likelihood that something like this is currently in use.

In effect, the Keras-based TF-Ranking made BERT more powerful as a ranking tool.

According to Google:

“Our experience shows that this TFR-BERT architecture delivers significant improvements in pretrained language model performance, leading to state-of-the-art performance for several popular ranking tasks…”
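
The general shape of that architecture can be sketched in a few lines of Keras. The following is a conceptual illustration of the idea only, not Google’s implementation: each query-document pair is assumed to have already been encoded by a BERT-style model, a small head turns each encoding into a score, and a listwise loss is applied across the whole result list. Every dimension, layer size, and name below is invented.

```python
# Conceptual sketch of the TFR-BERT idea (not Google's implementation):
# score BERT-style encodings of each (query, document) pair, then train the
# scores with a listwise ranking loss over the whole candidate list.
import tensorflow as tf

LIST_SIZE = 10      # candidate documents per query (illustrative)
ENCODING_DIM = 768  # pooled encoding size, typical of BERT-Base

# Placeholder input: pooled encodings for each (query, document) pair.
# In a real system these would come from running BERT over the tokenized pair.
pooled_encodings = tf.keras.Input(shape=(LIST_SIZE, ENCODING_DIM))

# A small scoring head maps each document's encoding to one relevance score.
scoring_head = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),
])
scores = tf.squeeze(scoring_head(pooled_encodings), axis=-1)  # (batch, LIST_SIZE)

ranker = tf.keras.Model(inputs=pooled_encodings, outputs=scores)

# A hand-rolled listwise softmax loss ties the encodings to the ranking objective
# (TF-Ranking ships ready-made ranking losses; this one is written out for clarity).
def listwise_softmax_loss(labels, logits):
    targets = labels / (tf.reduce_sum(labels, axis=-1, keepdims=True) + 1e-9)
    return -tf.reduce_sum(targets * tf.nn.log_softmax(logits, axis=-1), axis=-1)

ranker.compile(optimizer="adam", loss=listwise_softmax_loss)
```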

TF-Ranking and GAMs

There is another type of algorithm, called Generalized Additive Models (GAMs), that TF-Ranking also improves, creating an even more powerful version than the original.

One of the important things about this kind of algorithm is that it is transparent: everything that goes into producing a ranking can be seen and understood.

Google explained the importance of transparency as follows:

“Transparency and interpretability are important factors in deploying LTR models in ranking systems that can be involved in determining the outcomes of processes such as loan eligibility assessment, ad targeting, or guiding medical treatment decisions.

In such cases, the contribution of each individual feature to the final ranking should be examinable and understandable to ensure the transparency, accountability, and fairness of the outcomes.”

The problem with GAMs was that it was not known how to apply them to ranking problems.

To solve this problem and make GAMs usable for ranking, TF-Ranking was used to create neural ranking Generalized Additive Models (GAMs), bringing that transparency to web page ranking.

Google calls this interpretable learning-to-rank.

Here’s what the Google Artificial Intelligence article says:

“To this end, we have developed a neural ranking GAM, an extension of generalized additive models to ranking problems.

Unlike standard GAMs, a neural ranking GAM can take into account both the features of the ranked items and the context features (e.g., a query or user profile) to derive an interpretable, compact model.

For example, in the image below, a neural ranking GAM makes visible how distance, price, and relevance, in the context of a given user device, contribute to the final ranking of a hotel.

Neural ranking GAMs are now available as part of TF-Ranking.”
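
The additive structure that makes such a model inspectable can also be sketched. The following is not TF-Ranking’s GAM implementation, just a plain-Keras illustration of the principle: each feature gets its own tiny sub-network, and the final ranking score is simply the sum of the per-feature contributions, so each feature’s effect can be examined in isolation. The hotel features named here (distance, price, relevance) are taken from Google’s example; everything else is invented.

```python
# Conceptual sketch of a ranking GAM's additive structure (not TF-Ranking's code):
# one tiny sub-network per feature, with the final score being the plain sum of
# per-feature contributions. That sum is what keeps the model interpretable.
import tensorflow as tf

# Illustrative per-item features for a hotel, per Google's example.
feature_names = ["distance", "price", "relevance"]

inputs, contributions = [], []
for name in feature_names:
    feature_input = tf.keras.Input(shape=(1,), name=name)
    # Each sub-network sees only its own feature, so its output can be plotted
    # directly as "how this feature alone moves the score".
    sub_net = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ], name=f"{name}_subnet")
    inputs.append(feature_input)
    contributions.append(sub_net(feature_input))

# The ranking score is just the sum of the per-feature contributions.
score = tf.keras.layers.Add(name="score")(contributions)
gam_ranker = tf.keras.Model(inputs=inputs, outputs=score)
gam_ranker.summary()
```

How context features such as the user’s device enter the model is described in the quote above; this sketch covers only the per-item additive part.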

I asked Jeffrey Coyle, co-founder of the AI content optimization technology MarketMuse, about TF-Ranking and GAMs.

Jeffrey, who has a background in information technology and decades of experience in search marketing, noted that GAMs are an important technology and that improving them is a noteworthy event.

Jeffrey Coyle shared:

“I’ve spent the most time researching the neural ranking GAM innovations and their potential impact on context analysis (queries), which has been a long-term goal of Google’s ranking teams.

Neural RankGAM and related technologies are killer weapons for personalization (especially with user data and contextual information, like location) and for intent analysis.

With keras_dnn_tfrecord.py as a public example, we get a glimpse of the innovation at a basic level.

I recommend everyone check out the code.”

Gradient Boosted Decision Trees (GBDT)

Beating the baseline algorithm matters because it means the new approach is an achievement that improves the quality of search results.

In this case, the baseline is gradient boosted decision trees (GBDT), a machine learning technique that has several advantages.

But Google also explains that GBDTs have disadvantages:

“GBDTs cannot be directly applied to large discrete feature spaces, such as raw text. They are also, in general, less scalable than neural ranking models.”
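
As a rough illustration of the first point (again, a sketch, not from Google’s work): a neural ranker can consume raw text directly by embedding it, whereas a GBDT would need that text converted into hand-engineered numeric features first. The example titles below are invented.

```python
# Illustration: a neural scorer can take raw document text directly,
# embed it, and produce a relevance score with no hand-built numeric features.
import tensorflow as tf

# Hypothetical titles for one query's candidate documents.
titles = tf.constant([
    "cheap hotels near the airport",
    "airport parking rates",
    "luxury downtown hotel deals",
])

vectorizer = tf.keras.layers.TextVectorization(output_sequence_length=8)
vectorizer.adapt(titles)  # build a vocabulary from the (tiny) corpus

scorer = tf.keras.Sequential([
    vectorizer,                                         # raw strings -> token ids
    tf.keras.layers.Embedding(input_dim=vectorizer.vocabulary_size(),
                              output_dim=16),           # token ids -> dense vectors
    tf.keras.layers.GlobalAveragePooling1D(),           # one vector per document
    tf.keras.layers.Dense(1),                           # relevance score
])

print(scorer(titles))  # one (untrained) score per candidate document
```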

In a research paper entitled Are Neural Rankers Still Outperformed by Gradient Boosted Decision Trees?, the researchers state that neural learning-to-rank models have been “significantly worse” than … tree-based implementations.

Google’s researchers used the new Keras-based TF-Ranking to produce what they call the Data Augmented Self-Attentive Latent Cross (DASALC) model.

DASALC is important because it matches or exceeds the current state-of-the-art baselines:

“Our models perform comparably with the strong tree-based baseline, while outperforming recently published neural learning-to-rank methods by a large margin. Our results also serve as a benchmark for neural learning-to-rank models.”

Keras-based TF-Ranking speeds up the development of ranking algorithms

An important point is that this new system will speed up the research and development of new ranking systems, which includes identifying spam as part of ranking search results.

The article concludes:

“Overall, we believe the new Keras-based version of TF-Ranking will make it easier to conduct neural LTR research and to deploy production-grade ranking systems.”

Google has been innovating at a faster and faster pace in recent months, with several spam algorithm updates and two core algorithm updates within two months.

These new technologies may be the reason Google has been able to release so many new algorithms for fighting spam and improving rankings in general.

Citations

Google AI blog article:
Advances in TF-Ranking

Google’s new DASALC algorithm:
Are Neural Rankers Still Outperformed by Gradient Boosted Decision Trees?

Official TensorFlow website

TensorFlow Ranking v0.4.0 GitHub page
https://github.com/tensorflow/ranking/releases/tag/v0.4.0

Keras example keras_dnn_tfrecord.py
