I was working on the inventory management system and personally felt I was approaching the end of the project. I had sent the section I had completed up for testing and review. It had five main sections: Add Material, Check Stock Levels, Check-in, Check-out and Dispose. Each seemed to work fine on my machine, but not one of the sections worked for the person testing it. Something clearly wasn't right, so I had a look at the script. I hadn't accounted for a brand-new user using the software, and it completely fell apart at the first hurdle. To be completely honest, I had not accounted for many of the ways a user could use my software. This was a big no-no, and I had to go back to my script and make some changes.
This time I needed to do it right. I needed to try all the user use cases and predict everything the user could do with my software. Basically, I tried to break it and cause it to crash. I found multiple issues in my manual testing: popups not closing, website scrolling ceasing to work, weird graphical glitches, incorrect data showing, and so many grammar and spelling errors. It was kinda humiliating finding all these issues that now seem so obvious but that I glanced over the first time, but it shows the importance of reviewing and correcting your own work. Hopefully this is a lesson learned, and I will try to reflect on my work in the future.
In my manual testing extravaganza, I came across a glaring problem with the software that we had not noticed. In our testing we were using ten or twenty items in our database for our software to filter through, but when this project is finished the user will more than likely be using thousands of items at a time. To even test this, we needed a way to create a lot of data for our software to use. We were directed towards a JavaScript package called Faker, which is used to "generate massive amounts of realistic fake data." It can generate a bunch of different things like company names, addresses, commercial products, UUIDs, personal names and lots more. It had what we needed, so I wrote up a seeder script to generate one hundred thousand products and chuck them into our database. It was data we could use for testing, and at a glance they looked like real products. It did generate some gloriously weird product names, such as Awesome Concrete Mouse, Tasty Cotton Chair, Tasty Granite Cheese and my favorite, Intelligent Wooden Tuna. Really weird, I know, but I like them.
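For anyone curious, here is a minimal sketch of what a seeder like that can look like, assuming the @faker-js/faker package; the product shape and the insertProducts helper are illustrative stand-ins for whatever your database layer actually expects.

```js
// seed.js - a minimal seeder sketch, assuming @faker-js/faker.
// The product shape and insertProducts() are hypothetical stand-ins.
const { faker } = require('@faker-js/faker');

function makeProduct() {
  return {
    id: faker.string.uuid(),                          // unique identifier
    name: faker.commerce.productName(),               // e.g. "Intelligent Wooden Tuna"
    price: Number(faker.commerce.price()),            // realistic-looking price
    quantity: faker.number.int({ min: 0, max: 500 }), // stock on hand
  };
}

// Build one hundred thousand fake products in memory.
const products = Array.from({ length: 100_000 }, makeProduct);

// insertProducts(products); // bulk-insert into your database of choice
console.log(products[0].name); // spot-check one generated name
```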
So, we got to test the software with our newly generated data, and it brought the inventory management system to a halt. The scripts we wrote were trying to filter and render too many items; they could handle twenty or thirty, but not one hundred thousand. The site was crashing and running really slowly, my computer was overheating, and my desk was on fire (the fire is a bit dramatic, but everything else happened). After a lot of research into the cause of the issue, we came up with a couple of solutions and pinpointed which ones worked for us. Two big changes were brought in: pagination and filtering in the backend.
Pagination is kinda what it sounds like (even though I struggle to actually pronounce it): instead of showing and rendering all one hundred thousand items at the same time, it breaks them up into pages. So, page one will show and render the first hundred items, the next page renders the next hundred items, and so on and so on. It worked great, it loaded quickly, and it was easy to navigate to each page.
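To make that concrete, here is a rough sketch of the page-slicing idea, assuming the full product list is available on the server side; the names are illustrative, not our actual code.

```js
// A sketch of page-based slicing: page 1 returns items 0-99,
// page 2 returns items 100-199, and so on.
const PAGE_SIZE = 100;

function getPage(products, page) {
  const start = (page - 1) * PAGE_SIZE; // first index on the requested page
  return products.slice(start, start + PAGE_SIZE);
}

// getPage(allProducts, 1) -> the first hundred items
// getPage(allProducts, 2) -> the next hundred, and so on
```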
We also have a search bar at the top of the site so the user can search for a specific item. This would trigger the script to filter through all the items for the word the user was looking for, which can take time and be very processor heavy, especially on phones and tablets. So rather than drain the user's processing power trying to filter all those items, we instead do it in the backend. When I say do it in the backend, I mean the user types the word they need to search for, that word is sent to the server the website is hosted on, the server uses its superior processing power to do the filtering, and then it sends back the results of what the user was searching for. This sounds like more steps than needed, but when you are dealing with as much data as we are, it is more effective to do it that way.
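Here is a sketch of the idea, assuming an Express-style server; our actual stack may differ, and in practice the filtering would usually run as a database query rather than over an in-memory array.

```js
// A minimal sketch of backend filtering, assuming Express.
const express = require('express');
const app = express();

// Stand-in data; in the real system this lives in the database.
const allProducts = [
  { name: 'Awesome Concrete Mouse' },
  { name: 'Tasty Granite Cheese' },
];

app.get('/api/search', (req, res) => {
  const term = String(req.query.q || '').toLowerCase();
  // The heavy lifting happens here, on the server,
  // not on the user's phone or tablet.
  const results = allProducts.filter(p =>
    p.name.toLowerCase().includes(term)
  );
  res.json(results);
});

app.listen(3000);
```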
So, to recap: reflect on and test your own work to the point where it could be considered harassment. Thank you for reading my rants!