Not sure whether to upgrade your test automation and Service Virtualization software? We'll explain why upgrading is worth it.
As the Client Advocacy Manager for the testing products at HCL, I am often asked about software upgrades for our products, when to do them, and why. I caught up with Matt Tarnawsky from our HCL Product Management team to answer these questions.
Marianne: Matt, our testing products had new and exciting announcements in 2017. Can you tell us more about the new capabilities?
Matt: Of course. We put out several new releases, with a lot of exciting functionality inspired by our clients and our support and service teams.
The first major release of the year was version 9.1. For users of our integration testing and Service Virtualization tools, this release included the ability to accelerate the development of their tests by importing Swagger (OpenAPI) or WSDL definitions from the Developer Portal of IBM API Connect v5. It also included updates to our capabilities for deploying virtual services into Docker containers by extending the range of transports available within containers. We also provided updates for the latest MQ, WAS, and z/OS technologies, to stay in sync with our customers’ changing test environments. Finally, we improved the management of RTCP environments with the option to delete environments through the UI, Ant tasks, the command line, and the REST API.
For functional testing, we provided a range of new capabilities. The first thing we did was to provide the ability to script not just against the contents of the browser window, but also against the browser itself, opening up new avenues to gain better automated test coverage over web-based applications. Another way that we extended the range of testing that could be done was by updates to our recording tools, where we made it easier to record right-click actions. We added a range of other functionality, like the ability to use Firefox and Chrome on Mac OS, and to copy associated test variables when copying part of a script to a new script. Possibly the most exciting development in this area, though, was the ability to distribute tests across multiple machines, running them in parallel, which allows us to provide much faster feedback to development teams.
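The distributed-execution idea Matt describes can be sketched generically. Everything below is illustrative, not the product's actual API: the test names, agent names, and the simulated dispatch are stand-ins for how a runner might fan tests out across machines and collect results in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical list of functional test scripts to distribute.
TESTS = ["login_test", "checkout_test", "search_test", "profile_test"]

# Hypothetical pool of remote agent machines.
AGENTS = ["agent-1", "agent-2"]

def run_on_agent(test, agent):
    # In a real setup this would dispatch the test to the remote
    # machine (e.g. over SSH or an agent API); here we simply
    # simulate a passing result so the control flow is visible.
    return (test, agent, "pass")

def run_in_parallel(tests, agents):
    # Round-robin the tests over the available agents and run them
    # concurrently, collecting results as they complete.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [
            pool.submit(run_on_agent, test, agents[i % len(agents)])
            for i, test in enumerate(tests)
        ]
        return [f.result() for f in futures]

results = run_in_parallel(TESTS, AGENTS)
print(results)
```

The payoff is the one Matt mentions: with four tests split across two agents, wall-clock time roughly halves, so development teams get feedback sooner.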
In the performance test world, we improved recording by filtering out unnecessary domains after HTTP and service test recording completes, making it easier to identify the transactions you need to work with. We added new capabilities for HTTP and SOA test extension, such as the HTTP verb PUSH functionality, and enhanced reporting, for example with summary and trend reports.
Anyone who’d like more details can read our 9.1 blog.
Marianne: Matt, we have spoken with many clients about providing common integrations across the test portfolio. Our next release after 9.1 was 9.1.0.1. Can you talk about that version and what was delivered with that release?
Matt: Sure, Marianne. We’ve talked to a lot of customers for whom continuous testing is increasingly important. For that reason, the major piece of work we did for the 9.1.0.1 release was to make sure that all of our tools were available through Ant and Jenkins plugins. The former means testers can use Ant to integrate their tests with a wide range of Continuous Integration (CI) platforms. The latter means development teams can run test cases from Jenkins, one of the largest CI platforms in use today.
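The CI integration pattern is the same regardless of the driver: a build step assembles a command line for the test runner and gates the pipeline on its exit code. The executable name and flags below are hypothetical placeholders, not the product's real CLI, which is documented with each release.

```python
def build_test_command(project, test_name, results_dir):
    # Assemble the argument list for a hypothetical command-line
    # test runner. Keeping this as a list (rather than one string)
    # avoids shell-quoting problems when it is eventually executed.
    return [
        "run-tests",
        "--project", project,
        "--test", test_name,
        "--results", results_dir,
    ]

cmd = build_test_command("payments", "smoke_suite", "build/results")

# In an Ant- or Jenkins-driven pipeline, a build step would execute
# the command and let its return code pass or fail the build, e.g.:
#   completed = subprocess.run(cmd)
#   sys.exit(completed.returncode)
print(" ".join(cmd))
```

A Jenkins job or Ant `exec` task only needs to see a nonzero exit code to mark the build failed, which is what makes this style of integration portable across CI platforms.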
Another major concern is security, and so we also added Smart Card authentication support so that our tools can now connect to Rational Quality Manager using enhanced authentication types.
There was also a range of enhancements to the supported versions of browsers and application servers. Again, for more details, check out our 9.1.0.1 blog.
Marianne: This is all great news for testers, Matt! Another couple of areas we have spoken to clients about are reporting and executing integration tests in a performance schedule or compound test. Can you tell us about those features that were delivered in version 9.1.1?
Matt: Yes, we took the first steps in providing a central repository of tests across the workbench by adding unified reports for functional testing and performance testing; this makes it easy to view and share test results from your web browser, rather than the workbench. Users can read more about Unified Reporting in this blog.
And to your second point, Marianne, performance testers now have a much wider variety of protocols available to them. It’s now possible to take an API test from our Integration Tester product and turn that into a performance test using Performance Tester, combining all of the protocols available in Integration Tester with the rich performance test schedules and reporting available through Performance Tester. We have more information about this feature in this blog.
Marianne: Both of those features are great efficiency improvements for our clients. Matt, 9.1.1 had some other exciting enhancements. Can you share some of those highlights?
Matt: Of course. For integration testing and service virtualization, we focused on improving our support for mainframe technologies. We added support for shared queues on MQ, recording channel-based requests through the CICS Transaction Gateway, using COBOL copybook REDEFINES when testing COBOL data files and applications, and using HTTPS to communicate between the CICS DPL intercept and RTCP. We have a 9.1.1 blog about these mainframe features.
By the way, with 9.1.1 all communications with the Test Control Panel are secure by default. Clients can read more about HTTPS in this blog.
When building functional test automation, it’s now possible to record and play back test scripts using the Microsoft Edge browser; it’s also possible to use datapool encryption to encrypt confidential information, such as a set of passwords or account numbers that are used during a test. Functional tests can now also be integrated into UrbanCode Deploy on Linux. You can read more in this blog.
For performance testing, testers can record and play back an app that uses the WebSocket protocol, assign datapool values to multiple tests by using a Datapool Mapper, and record and play back tests that use the OData protocol. We have another blog about these features too.
Marianne: Matt, we have just released version 9.1.1.1. What can you share about that release?
Matt: The 9.1.1.1 release has been quite exciting for us, because it’s been an opportunity for us to share some of the work we’ve been doing internally around test data, and to get feedback from our customers. We’ve added an experimental application for test data fabrication to the Test Control Panel home page. This allows you to create custom test data records that are required by the system under test. It’s an early release of this new functionality, and we’re asking for feedback on it – if you use it, and have any thoughts you’d like to share, please get in touch with us at firstname.lastname@example.org. Further information on this capability is available here.
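As a rough illustration of what test data fabrication means in practice, the sketch below generates synthetic records shaped like what a system under test might expect. The field names and formats are invented for the example and are not the product's actual schema.

```python
import random
import string

# Seed the generator so the fabricated data is repeatable from run
# to run, which keeps tests that consume it deterministic.
random.seed(7)

def fabricate_account(record_id):
    # Fabricate one synthetic account record. The shape (a 6-letter
    # name and a 10-digit account number) is purely illustrative.
    name = "".join(random.choices(string.ascii_uppercase, k=6))
    account_number = "".join(random.choices(string.digits, k=10))
    return {"id": record_id, "name": name, "account": account_number}

records = [fabricate_account(i) for i in range(3)]
print(records)
```

The point of fabricating data like this, rather than copying production data, is that the records match the structure the system under test requires without exposing any real customer information.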
Aside from that, we added support for optical character recognition (OCR), which means that functional test scripts can use image verification capabilities on text in applications running in remote VDIs. We’ve also continued with our goals to integrate into more pipelines by providing a Maven plugin which can execute API tests and virtual services as part of a build pipeline.
Marianne: Matt, we have delivered a number of significant features this year. Surely, these are great reasons for clients to upgrade to the latest versions of our testing products?
Matt: Yes, first and foremost, anyone upgrading to the latest version will get access to everything we’ve just talked about. Updates also include incremental enhancements, fixes, and security patches, all of which are covered by our software subscription service. Upgrading to the latest version also lets our users see where we’re going and give us feedback that can be passed on to the product management and development teams.
Marianne: Great points, Matt. Preventing problems before they occur by getting updates is critical to staying on schedule when testing a solution. And new features can really help with productivity. Providing feedback is really important, as well. So, when is the best time to upgrade?
Matt: That’s a great question, Marianne. We know clients can’t always upgrade when the software is released. They might be doing upgrades on a schedule, such as once a year, or they might be in a critical stage of their testing. We do want to encourage clients to upgrade if they can. When they do, they’ll really get the value of what we offer.
Marianne: So, the bottom line is that if clients want to maximize the return on their software investment, they should upgrade their software when they can. But some clients are nervous about upgrading. Are there instructions and guidance on upgrading?
Matt: Yes, clients can find upgrade information including considerations and requirements in our documentation. While there usually isn’t any need to do anything special to migrate data, we do make sure we provide documentation if there are any exceptions to that.
Marianne: Great! So where do clients find out more information about planning and executing an upgrade?
Matt: Announcement letters provide information about what is new in a release, and fixes and releases are announced publicly as well. Clients can follow us on Twitter @HCLProducts and can see the posts mentioned earlier for our recent releases.
Marianne: This is all great information, Matt. Thank you for taking the time with me.
Offering Manager, Service Virtualization, HCL Technologies
Matt Tarnawsky has been working for a number of years to improve software quality by helping his clients shift their testing to the left through the use of service virtualization and API testing. He is currently the product manager for API testing & service virtualization at HCL, providing the direction for IBM and HCL's Integration Tester and Test Virtualization Server products.
Marianne Hollier is an Open Group Master Certified IT specialist in application development. She has strong, practical expertise in measurably improving the software development lifecycle and driving the necessary cultural changes to make it work. Marianne is instrumental in architecting, tailoring, and deploying best practices and appropriate software development tools on many types of projects—from large to small, long to fast-track, agile to traditional. Marianne is passionate about all things testing—process, tools, culture, and automation. Her experience is broad-based, spanning both custom projects and standard software packages that apply to pharmaceutical, refining, telecommunications, healthcare, financial, automotive, government, and retail industries.