Tests Validate Automated Protocol Translator Vision of Solving System Interoperability Issues
In a previous blog post, we outlined Rite-Solutions’ vision for its Automated Protocol Translator (APT): addressing the interoperability challenges of mixed-system environments. We see the APT accelerating the integration of disparate components and systems, which in turn reduces the time to market and cost of new products and solutions. So, how real is the vision of the APT as an open-systems enabler? In this post, we highlight some initial (and very encouraging) results.
To test how well the APT’s auto-generated translator code performs, we mirrored a real-world production environment. Under the stresses of this simulated, but realistic, environment, a software component (1) sent information using its existing protocol, (2) the auto-generated code translated that information, and (3) forwarded it over a different protocol to a different system component.
We performed several types of functional, endurance, and performance tests. These tests translated different types of real-world Common Object Request Broker Architecture (CORBA) events to and from Advanced Message Queuing Protocol (AMQP) and Google Protocol Buffer (GPB) messages to evaluate a range of translations, from simple to complex.
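To make this kind of translation concrete, here is a minimal sketch in Python of what a single field-level mapping can involve: renaming fields, flattening a nested structure, and converting units. The event type, field names, and conversion rules below are hypothetical placeholders, not the production IDL or GPB definitions used in our tests.

```python
# Illustration only: the event type, field names, and conversion rules are
# hypothetical placeholders, not the production IDL/.proto definitions.

def translate_track_event(corba_event: dict) -> dict:
    """Map a CORBA-style event onto an equivalent GPB/AMQP-style message:
    renaming fields, flattening a nested structure, and converting units."""
    return {
        "track_id": corba_event["trackNumber"],            # field rename
        "latitude": corba_event["position"]["lat"],        # flatten nested struct
        "longitude": corba_event["position"]["lon"],
        "speed_mps": corba_event["speedKnots"] * 0.514444, # knots -> meters/second
    }

print(translate_track_event(
    {"trackNumber": 7, "position": {"lat": 41.4, "lon": -71.1}, "speedKnots": 12.0}
))
```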
Below is a summary of the test procedure that we used:
- First, we took advantage of a timesaving APT feature that allowed us to import existing, production-ready CORBA Interface Definition Language (IDL) definitions for events and GPB definitions for AMQP messages into the tool.
- Next, we added information that maps the event/message fields defined in each of the protocols and used the APT tool to auto-generate the translator code (a simplified sketch of this mapping appears after this list).
- Usually, the next step in the development process is software/system integration. In our case, integration was greatly simplified because no changes were necessary to the existing production software.
- Finally, we chose to run the auto-generated code in a separate Virtual Machine (VM). Doing so allowed us to evaluate the resource utilization and performance of the auto-generated software and collect metrics.
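As noted in the mapping step above, here is a simplified sketch of the shape such a generated translator can take: a declarative field map drives the translation, and the translator runs as its own process (for example, in its own VM) between the two existing systems, so neither side needs code changes. All names below are illustrative placeholders, not actual APT output.

```python
"""A minimal sketch (not actual APT output) of the shape a generated translator
can take. All names here are illustrative placeholders."""

import json

# Hypothetical field map standing in for the mapping information supplied to
# the tool: CORBA event field -> GPB/AMQP message field.
FIELD_MAP = {
    "trackNumber": "track_id",
    "latitudeDeg": "latitude",
    "longitudeDeg": "longitude",
}

def translate(event: dict) -> dict:
    """Apply the declarative field map to one incoming event."""
    return {dst: event[src] for src, dst in FIELD_MAP.items()}

def receive_corba_event() -> dict:
    """Placeholder for the generated CORBA event subscriber (existing protocol in)."""
    return {"trackNumber": 42, "latitudeDeg": 41.35, "longitudeDeg": -71.13}

def publish_amqp_message(message: dict) -> None:
    """Placeholder for the generated GPB serializer and AMQP publisher (new protocol out)."""
    print("publishing:", json.dumps(message))

if __name__ == "__main__":
    # One receive -> translate -> publish pass; the real translator runs
    # continuously, sitting between the two systems with no changes to either.
    publish_amqp_message(translate(receive_corba_event()))
```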
With the procedure in mind, let’s turn our attention to the test results.
The functional tests demonstrated that the auto-generated code translated events and messages between the different protocols correctly. However, the results of the performance and endurance tests were far more interesting.
The endurance tests ran for various periods of time. During the longest test, which was about 65 hours long, we translated close to 2 million messages (a sustained rate of roughly 30,000 messages per hour) without a single problem. Even we thought that was pretty impressive! The performance test results were outstanding, too. The average translation time in the production test environment was less than a few milliseconds for one of the more complicated translations, well within the production system’s performance requirements. We are still testing different types of events and messages. So far, we have been extremely pleased with the preliminary results.
So, what does this all mean? It means the APT works as originally conceived and is fulfilling the expectations outlined in our Small Business Innovation Research (SBIR) program application. We validated our initial concept in a real-world environment. The APT and the software it auto-generates not only work, they work extremely well.
We are excited about extending support to other protocols and IDLs used to exchange information. We also plan to use the APT on a much broader and larger scale. Despite the APT’s technical success, the story still isn’t finished. Our next blog post will review the business case for the APT. Comparing the APT with current, manual methods, we will show how it reduces the time and cost of integrating software and systems. Spoiler alert: Those results are outstanding, too!