Chinese researchers have developed a method for protecting copyrighted image datasets used in machine vision training, effectively “watermarking” the images and then decrypting “clean” versions via a cloud-based platform for authorized users only.

System tests show that training a machine learning model on the protected images catastrophically impairs the model’s accuracy. Testing the system on two popular open source image datasets, the researchers found that accuracies of 86.21% and 74.00% obtained on the clean data fall to 38.23% and 16.20% when models are trained on the protected (undecrypted) images.

From the paper: examples, from left to right, of clean, protected (i.e., perturbed), and restored images. Source: https://arxiv.org/pdf/2109.07921.pdf

This allows for the widespread public distribution of high-quality, expensive datasets, and (presumably) even for demonstration training on the “polluted” data to indicate a dataset’s approximate effectiveness.

Cloud-based dataset authentication

The paper comes from researchers at two departments of Nanjing University of Aeronautics and Astronautics, and envisages routine use of a Dataset Management Cloud Platform (DMCP), a remote authentication framework that would provide telemetry-based pre-launch validation similar to that common in heavyweight local installations such as Adobe Creative Suite.

Flow and framework of the proposed method.


A protected image is created by adding feature-space perturbations, adapting an adversarial attack method developed in 2019 at Duke University in North Carolina.

Feature-space perturbations perform an activation attack in which the features of a single image are pushed toward the feature state of an adversarial target image. In this case, the attack forces a machine learning recognition system to classify a dog as an airplane. Source: https://openaccess.thecvf.com

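The paper does not publish code for this step, but the general idea of a feature-space perturbation can be sketched roughly as follows in PyTorch; the feature extractor, step count, and epsilon bound here are illustrative assumptions rather than values from the paper or the Duke work:

```python
# Illustrative feature-space ("feature collision") perturbation, not the
# paper's exact code. Assumes PyTorch/torchvision and a pretrained ResNet-18
# used purely as a feature extractor; epsilon and step count are arbitrary.
import torch
import torchvision.models as models

feature_extractor = torch.nn.Sequential(
    *list(models.resnet18(weights="IMAGENET1K_V1").children())[:-1]
).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def perturb_toward_target(source, target, steps=100, lr=0.01, epsilon=0.1):
    """Push the features of `source` toward the features of `target`,
    keeping the pixel-space change within +/- epsilon."""
    delta = torch.zeros_like(source, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_features = feature_extractor(target).flatten(1)
    for _ in range(steps):
        optimizer.zero_grad()
        features = feature_extractor((source + delta).clamp(0, 1)).flatten(1)
        loss = torch.nn.functional.mse_loss(features, target_features)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the perturbation visually small
    return (source + delta).detach().clamp(0, 1)
```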

Next, the unmodified image is embedded into the perturbed image using block pairing and block transformation, as proposed in a 2016 paper in which the data to be hidden is concealed in encrypted images by reversible image transformation.
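A heavily simplified sketch of rank-based block pairing, in the spirit of that reversible transformation, might look like the following; the full 2016 method also transforms each block and records overflow information, all of which is omitted here:

```python
# Heavily simplified illustration of block pairing for a reversible image
# transformation; block size and the rank-by-mean pairing rule are assumptions
# of this sketch, not the 2016 paper's exact procedure.
import numpy as np

def split_blocks(img, block=8):
    """Cut an (H, W, ...) image into a flat list of block x block tiles."""
    h, w = img.shape[:2]
    return [img[i:i + block, j:j + block]
            for i in range(0, h, block) for j in range(0, w, block)]

def pair_blocks(secret_img, cover_img, block=8):
    """Pair each block of the secret (clean) image with the cover (perturbed)
    block of similar rank in mean intensity; the returned pairing sequence is
    the information that later gets encrypted and embedded."""
    secret_order = np.argsort([b.mean() for b in split_blocks(secret_img, block)])
    cover_order = np.argsort([b.mean() for b in split_blocks(cover_img, block)])
    pairing = np.empty(len(secret_order), dtype=np.int32)
    pairing[secret_order] = cover_order  # secret block i -> cover block pairing[i]
    return pairing
```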

The sequence containing the block-pairing information is then embedded into the temporary interstitial image using AES encryption, the key for which is later retrieved from the DMCP at authentication time. The Least Significant Bit (LSB) steganographic algorithm is then used to embed the key. The authors refer to this process as Modified Reversible Image Transformation (mRIT).
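As a rough illustration of that embedding step, assuming the pycryptodome library and a uint8 NumPy image array (the paper’s actual key management runs through the DMCP and is not reproduced here):

```python
# Rough sketch of AES-encrypting a payload (e.g. the pairing sequence) and
# hiding bytes in the image's least significant bits. Illustrative only.
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def encrypt_payload(data: bytes, key: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(data)
    return cipher.nonce + tag + ciphertext  # 16-byte nonce + 16-byte tag + data

def embed_lsb(image: np.ndarray, payload: bytes) -> np.ndarray:
    """Write the payload bit by bit into the least significant bit of each
    pixel value (capacity: one bit per channel value)."""
    flat = image.astype(np.uint8).flatten()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    assert bits.size <= flat.size, "payload too large for this image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

# Hypothetical usage with the pairing sequence from the previous sketch:
# key = get_random_bytes(16)              # in the paper, managed by the DMCP
# stego = embed_lsb(protected_image, encrypt_payload(pairing.tobytes(), key))
```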

At decryption time the mRIT routine is essentially run in reverse, and the “clean” image is recovered for use in training.
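Under the same illustrative assumptions as the snippets above, the reverse path would read the payload back out of the least significant bits and decrypt it with the key fetched from the DMCP before undoing the block transformation:

```python
# Sketch of the reverse (decryption) path; assumes the same illustrative
# payload layout (nonce + tag + ciphertext) as the embedding sketch above.
import numpy as np
from Crypto.Cipher import AES

def extract_lsb(image: np.ndarray, num_bytes: int) -> bytes:
    flat = image.astype(np.uint8).flatten()
    bits = flat[:num_bytes * 8] & 1
    return np.packbits(bits).tobytes()

def decrypt_payload(blob: bytes, key: bytes) -> bytes:
    nonce, tag, ciphertext = blob[:16], blob[16:32], blob[32:]
    cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)  # raises if tampered with
```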

Testing

The researchers tested the system on a ResNet-18 architecture with two datasets: CIFAR-10, the 2009 collection containing 6,000 images in each of 10 classes; and Stanford’s TinyImageNet, a subset of the ImageNet recognition challenge data, which comprises a 100,000-image training set, a 10,000-image validation set, and a 10,000-image test set.
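For reference, a conventional way to load these two benchmarks (torchvision ships CIFAR-10; TinyImageNet has no built-in loader, so an ImageFolder over a locally downloaded copy is assumed here) is:

```python
# Loading the two benchmark datasets; paths and transforms are assumptions.
import torchvision
import torchvision.transforms as T

transform = T.Compose([T.ToTensor()])
cifar10 = torchvision.datasets.CIFAR10(root="./data", train=True,
                                       download=True, transform=transform)
tiny_imagenet = torchvision.datasets.ImageFolder("./tiny-imagenet-200/train",
                                                 transform=transform)
```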

The ResNet model was trained from scratch in three configurations: clean, protected, and decrypted. Both datasets were trained with the Adam optimizer at an initial learning rate of 0.01, a batch size of 128, and 80 training epochs.
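The authors’ training code is not public, but the reported configuration corresponds roughly to the following PyTorch sketch; the dataset objects, device handling, and absence of evaluation are assumptions of this illustration:

```python
# Approximate training setup matching the reported hyperparameters:
# ResNet-18 from scratch, Adam, lr 0.01, batch size 128, 80 epochs.
import torch
from torch.utils.data import DataLoader
import torchvision.models as models

def train(dataset, num_classes, epochs=80, batch_size=128, lr=0.01, device="cuda"):
    model = models.resnet18(num_classes=num_classes).to(device)  # no pretrained weights
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=4)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# The same routine would be run on the clean, protected, and decrypted
# versions of each dataset to reproduce the three reported configurations.
```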

Training and test accuracy from the system tests. Small losses are observed in the training statistics for the restored (i.e., decrypted) images.


Although the paper states that the performance of a model trained on the restored dataset is unaffected, the results show small accuracy losses on the restored data compared with the original data: from 86.21% to 85.86% on CIFAR-10, and from 74.00% to 73.20% on TinyImageNet.

However, given that even small changes in seeding (and GPU hardware) can affect training results, this seems a minimal and acceptable trade-off between IP protection and accuracy.

The scope of model protection

Previous work has focused mainly on IP protection for the trained machine learning models themselves, on the assumption that the training data itself is harder to protect: a 2018 Japanese study provided a method for embedding watermarks in deep neural networks, and earlier work from 2017 offered a similar approach.

A 2018 IBM initiative conducted perhaps the most in-depth and committed study of the potential of watermarking for neural network models. That approach differed from the new study in that it sought to incorporate non-reversible watermarks into the training data and then rely on neural network filtering to “discount” the interference in the data.


IBM’s neural network system for “ignoring” watermarks is based on protecting those parts of the architecture that are designed to recognize and reject watermarked information. Source: https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf

Piracy vector

While the search for encryption frameworks for IP-protected datasets may seem like an edge case in a machine learning culture that still depends on open source review and information sharing across the global research community, interest in privacy-preserving and data-protecting algorithms continues to grow.

The new study does not add random perturbations to the image data, but rather structured, targeted shifts in feature space. Consequently, the current crop of watermark-removal and image-enhancement computer vision projects could potentially “restore” the images to a higher perceived quality for human viewers without actually removing the perturbations that cause misclassification.

In many computer vision applications, particularly those involving labeling and entity recognition, such illicitly restored images would likely continue to cause classification errors. However, in cases where image transformation is a core objective (such as face generation or deepfake applications), algorithmically restored images may still be useful for developing functional algorithms.
