One of the requests we hear quite a bit is, ‘How can I get fresh data to use for my tests?’ There are two typical answers to this question:
1) You take a cut of production data.
This approach has multiple problems – getting access to the data, anonymizing it, and then periodically resetting it in your environment. It gets even more difficult with GDPR coming into force in 2018, which makes exposing personal data a much higher risk for any business.
2) You manually create a data set.
This approach, while lower risk, has its own complexities: how much time will you spend creating the data? And how much data can you produce with the time and resources available to you?
Given the issues with these approaches, we’ve been looking at opportunities to help our customers manage the issue of creating data for tests and virtual services.
In our latest release, we have added a ‘Test Data’ entry to the dashboard.
Selecting this entry will launch our experimental Test Data Fabrication service.
Test Data Fabrication currently has two simple modes of operation:
1) Get Data – use the record definitions you have created to generate a fresh set of test data.
2) Define Data – specify the record format that you want to generate and provide the seed data for the engine.
Each option has a ‘guide me’ help page to walk you through the options available to you. Rather than write a blog post that is simply a re-hash of the help content, we want to show how you can take advantage of this capability today by solving a real problem.
We recently came across this tweet from an IBM researcher in the UK.
Fabricating sample passwords based on a given dictionary is something our Test Data Fabrication service can do in its sleep. So, let’s look at what we need to do to build this capability.
First of all, we need to acquire a dictionary and specify a field to hold its contents. For this example, we’ll keep the dictionary very small:
For a simple dictionary like this we’ve just used a plain text file. Of course, for larger dictionaries, you can upload JSON or CSV files. For this configuration, we simply need to specify the field name and the path to the dictionary:
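As an aside, the plain-text dictionary format is simply one word per line. Here is a minimal Python sketch of how such a file might be read into a word list (the file name and helper function are our own illustration, not the service's internals):

```python
def load_dictionary(path):
    """Load a plain-text dictionary: one word per line, blank lines ignored."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Example (assuming a file with one word per line exists at this path):
# words = load_dictionary("pwd_dictionary.txt")
```

Because the service links to the file rather than copying its contents, a loader like this would pick up newly added words on the next read.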
Once we click ‘Create’, the field is created and linked to the file on disk. We now have a field we can use to build record definitions (note that the link to the file is dynamic, meaning that if we add more items to the file later, they will automatically be available to the Test Data Fabrication engine).
Next, let’s create a record definition for a password. In the Define Data view, we can specify a new ‘Record Type’ by providing a name for the record and then adding the fields we wish to use. In this case we create a ‘pwdDict’ record and, because we want multiple words in our password, we add the dictionary field several times.
So, those steps created our record definition for a simple dictionary-based password. Let’s try to generate some sample password data!
On the ‘Get Data’ page we simply select the record type we wish to generate data for, specify the number of records to generate and optionally provide a seed value to the engine (seed values allow the same data set to be re-created over and over again if required).
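To illustrate what a seed buys you, here is a rough Python sketch of seeded fabrication, assuming a pwdDict-style record built from three dictionary words (the word list and generation logic are invented for the example; the real engine's algorithm is not documented here):

```python
import random

# Stand-in word list; in practice this would come from the dictionary field.
DICTIONARY = ["correct", "horse", "battery", "staple"]

def fabricate_passwords(count, seed=None, words_per_password=3):
    """Generate `count` dictionary-based passwords. Supplying the same
    seed reproduces the identical data set on every run."""
    rng = random.Random(seed)  # a fixed seed makes the output repeatable
    return [
        "".join(rng.choice(DICTIONARY) for _ in range(words_per_password))
        for _ in range(count)
    ]
```

Running `fabricate_passwords(100, seed=42)` twice yields the same 100 records, which is exactly why a seed is useful when a test suite needs a stable data set.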
Click ‘Download’ and we are prompted to download a CSV file with our generated entries.
Here is our generated set of potential password combinations from our small dictionary:
Obviously, for such a small dictionary we expect to see a lot of repetition of values; adding more entries to the dictionary will grow the fabrication space significantly.
We don’t have time in this blog post to write the required Python library bindings for Dale’s specific use case. However, as he’s a techie, we trust it’s enough for him to know that all of these capabilities are exposed through OpenAPI-documented REST APIs:
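As a taste of what such bindings might look like, here is a hedged sketch that assembles a hypothetical fabrication request. The endpoint path, parameter names, and JSON shape below are invented for illustration; the service's OpenAPI document is the real contract:

```python
import json
from urllib import request

BASE_URL = "http://localhost:5555"  # placeholder host for the service

def build_fabrication_request(record_type, count, seed=None):
    """Assemble a (url, payload) pair for a hypothetical 'get data' call.
    The path and field names are illustrative, not the documented API."""
    payload = {"recordType": record_type, "count": count}
    if seed is not None:
        payload["seed"] = seed
    return f"{BASE_URL}/api/fabricate", payload

url, payload = build_fabrication_request("pwdDict", count=100, seed=42)
# Sending the request is left commented out, as the endpoint is invented:
# req = request.Request(url, data=json.dumps(payload).encode(),
#                       headers={"Content-Type": "application/json"})
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```

A real client would consult the OpenAPI specification shipped with the service to discover the actual paths, parameters, and response formats.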
So, that was a quick walkthrough of using Test Data Fabrication to solve a simple problem. How might we take advantage of this on a more complex project? Well, we ship a few sample records and fields with the experiment which, when combined, can cover a number of interesting user-oriented scenarios.
Where we go from here is up to you. We obviously have a number of enhancements we are planning to make in this area over time. However, we want to hear from you: How would you like to see this develop? What would make your testing easier? Feel free to reach out to the team here: firstname.lastname@example.org or talk to your client representative who will be happy to pass on your feedback.
Chris Haggan is the HCL Product Manager for Functional & Performance Test Automation and a member of the World Wide Testing team, with over 14 years in the integration testing and service virtualization space. He has worked with numerous clients, ensuring that when they modernize and connect their disparate IT systems, be it through EAI, SOA or APIs, they are successful and efficient.