The startup, whose advisory board includes former government and military figures, has launched a new platform designed to assess the safety of artificial intelligence applications prior to deployment. Early adopters of the system are said to include the United States Air Force and the Department of Homeland Security.

The platform is called VESPR, and comes from CalypsoAI – founded in 2018, and headquartered across Silicon Valley, Dublin, and an officially ‘undisclosed’ location in Virginia, home turf of the CIA at Langley.

VESPR is a model risk management (MRM) system designed to facilitate a federally compliant accreditation pipeline for deployed algorithms. It provides both a user-friendly dashboard-style GUI environment and a CLI interface for more advanced use.

Source: https://www.youtube.com/watch?v=lMhS6j7t2pI

VESPR was built to embody CalypsoAI’s standards for machine learning validation, verification and accreditation, and features hand-crafted adversarial machine learning libraries. It also provides automated stress-testing routines for candidate algorithms prior to implementation.

National Artificial Intelligence Research Resource Task Force

The timing of the release may be related to yesterday’s launch by the Biden Administration of the National Artificial Intelligence Research Resource Task Force, a body designed to serve as a federal advisory committee as mandated by the National Artificial Intelligence Initiative Act of 2020.

In the United States and elsewhere, there has been growing pressure for meaningful regulatory standards around machine learning systems, especially in mission-critical areas such as critical infrastructure and military use. Because ML systems remain in a formative phase of rapid progress, they represent a relatively unstable and often controversial technology, from which it is now necessary to extract reproducible and accountable analytical algorithms – if this proves possible.

In April, CalypsoAI lent its support to the Endless Frontier Act, a congressional bill aimed at reforming science funding in light of China’s growing prominence in artificial intelligence, though the act was eventually diluted at the Senate stage.

Validation of Federal Artificial Intelligence

According to the VESPR press release, the framework covers, among other areas, machine vision and natural language processing (NLP).

CalypsoAI states that VESPR was created “with critical input from existing national security customers, and emerged from years of self-funded adversarial machine learning research.”

The system screens shown in the promotional video (see end of article) appear to include detection and/or simulation routines for data poisoning and noise perturbation, simulating the behavior of potential attackers against deployed systems.
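CalypsoAI has not published implementation details, but the two attack classes mentioned can be illustrated in miniature. The sketch below – a toy nearest-centroid classifier, not anything from VESPR – shows how a stress-testing routine might measure the effect of label-flipping data poisoning on training, and of Gaussian input noise on a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset: points clustered around (-2, -2) and (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def accuracy(centroids, X, y):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    pred = np.array(classes)[dists.argmin(axis=0)]
    return (pred == y).mean()

clean_model = fit_centroids(X, y)

# 1. Data poisoning: flip 30% of the training labels and retrain,
#    then compare the poisoned model against the clean one.
y_poisoned = y.copy()
flip = rng.choice(len(y), size=60, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_model = fit_centroids(X, y_poisoned)

# 2. Noise perturbation: evaluate the clean model on increasingly noisy inputs.
for sigma in (0.0, 1.0, 3.0):
    X_noisy = X + rng.normal(0, sigma, X.shape)
    print(f"sigma={sigma}: clean-model accuracy {accuracy(clean_model, X_noisy, y):.2f}")

print(f"poisoned-training accuracy on clean data: {accuracy(poisoned_model, X, y):.2f}")
```

A production system would run such sweeps automatically across many attack types and report where accuracy degrades, but the measurement principle is the same.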


The system appears to make use of historical data from both domestic and foreign events. Target categories include ‘Protests’ and ‘Riots’, as well as the less transparent ‘Strategic Developments’. Domestic terrorist incidents also appear to be included in the system’s reference databases, and ‘Violence Against Civilians’ is another available target category, along with ‘Battles’ and ‘Explosions/Remote Violence’.


The system also appears to allow properties to be locked in the ‘BIAS Management’ section of the configuration, apparently designed to prevent overfitting, or to avoid the unwanted elimination of small anomalous events that may be of interest to the analytical routines. In the video, VESPR is shown processing tabular data from Ukraine.

Beyond this initial publicity campaign, it is unlikely (perhaps by design) that we will hear much more about this government-facing SaaS product, even where it succeeds; it shares its name with a cafe franchise service, a social dating app and a streaming album, and must additionally contend with the near-identically named VSEPR model in chemistry.

CalypsoAI received $13 million in a Series A fundraising round led by Paladin Capital Group in 2020. Other investors included 8VC, Lockheed Martin Ventures, Manta Ray Ventures, Frontline Ventures, Lightspeed Venture Partners and Pallas Ventures.

CalypsoAI founder Neil Serebryany, who began unspecified research work at the Department of Defense in 2018, wrote in a company blog post that the venture was founded as a possible solution to the government’s fear of adopting advanced algorithmic systems in an unregulated climate:

‘The primary reason for this fear of AI projects, which leads to their rejection within government, sounds prosaic but is actually quite complex: they were rejected for a lack of quality assurance […] Artificial intelligence models cannot be evaluated in the same way as traditional software. This is due to the underlying nature of model construction, and the very complex ways in which models can fail. Because government organizations lacked a mechanism to evaluate these non-deterministic systems in a deterministic, verifiable manner, they were unable to assess the quality of AI models against a benchmark. This led to fears that the models might fail, malfunction, or be hacked by an adversary at the moment they were most needed – for example in combat, in flight, or during a complex medical procedure.’
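Serebryany does not describe CalypsoAI’s mechanism, but the core idea in the quote – evaluating a stochastic system in a deterministic, verifiable way – can be sketched simply: pin the random seed, fingerprint the exact evaluation data, and emit an auditable report against a benchmark threshold. The function names, toy model and threshold below are illustrative, not part of any real product:

```python
import hashlib
import json
import random

def deterministic_eval(model_fn, eval_set, seed=1234, threshold=0.8):
    """Score a (possibly stochastic) model reproducibly: fix the RNG seed
    and fingerprint the exact evaluation data used, so any auditor can
    re-run the same evaluation and obtain the same report."""
    random.seed(seed)  # pin any randomness inside model_fn
    fingerprint = hashlib.sha256(
        json.dumps(eval_set, sort_keys=True).encode()
    ).hexdigest()
    correct = sum(model_fn(x) == label for x, label in eval_set)
    acc = correct / len(eval_set)
    return {
        "dataset_sha256": fingerprint,
        "seed": seed,
        "accuracy": acc,
        "passes_benchmark": acc >= threshold,
    }

# A toy 'model' with internal randomness: usually answers correctly.
def noisy_model(x):
    return x % 2 if random.random() > 0.1 else 1 - x % 2

eval_set = [(i, i % 2) for i in range(100)]
report_a = deterministic_eval(noisy_model, eval_set)
report_b = deterministic_eval(noisy_model, eval_set)
assert report_a == report_b  # identical runs produce an identical, verifiable report
```

The point is not the toy classifier but the contract: given the same seed and the same fingerprinted data, the non-deterministic system yields a repeatable benchmark score that an accrediting body can check.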

Advisory Board

A month before the investment round, the company established a national security Advisory Board that includes Tony DeMartino, a former deputy chief of staff to Secretary of Defense Jim Mattis and now a founder of the Washington-based strategic consulting firm Pallas Advisors; former Deputy Under Secretary of Defense for Intelligence (under President Trump) Kari Bingen; former CIA Deputy Director for Digital Innovation Sean Roche, previously a cyber intelligence specialist at that organization; and Michael Molino, Executive Vice President of Business Development at ASRC Federal, which provides advisory, research and technical migration capabilities across a range of critical federal services.

According to the release:

‘VESPR delivers advanced AI testing capabilities and a streamlined workflow to ensure that all machine learning algorithms put into production are secure. VESPR provides unparalleled safety and security across a wide range of AI systems, from computer vision to natural language processing. The VESPR process provides testing, evaluation, verification and validation (TEVV) across the entire secure machine learning lifecycle, from the research and development phase through to model deployment. The end result is artificial intelligence systems that deliver accurate performance, with comprehensive monitoring and reporting of model characteristics, vulnerabilities and performance.’

The promotional video for VESPR is shown below:
