Mocking & Mockito

For the final blog entry for this class, I decided to cover a topic that I have personal experience with but wanted to revisit for deeper understanding: mocking, and specifically Mockito. I found an article by Ivan Pavlov about Mockito that is a guide for everyday use of the tool, exactly what I was looking for! The article includes some in-depth analysis that could serve as supplementary documentation for several aspects of Mockito. It also offers insight into what mocks and spies are in the context of testing, unit testing specifically, along with examples of Mockito in action.

For unit tests, we are testing the smallest unit of code possible for validity. The article states that real dependencies are not actually needed because we can create empty implementations of an interface for specific scenarios, known as stubs. A stub is another kind of test double, alongside mocks and spies, the two test doubles relevant to Mockito. According to the article, "mocking for unit testing is when you create an object that implements the behavior of a real subsystem in controlled ways." Mocks basically serve as a replacement for a dependency in a unit test. Mockito is used to create mocks, and you can tell Mockito what to do when a method is called on a mocked object.
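To make that concrete, here is a minimal sketch of stubbing with Mockito and JUnit 5. The UserRepository interface and everything in it are my own hypothetical example, not from the article:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical dependency we want to replace with a mock.
interface UserRepository {
    String findNameById(int id);
}

class UserRepositoryMockTest {
    @Test
    void returnsStubbedValue() {
        // Create a mock of the interface; no real implementation is needed.
        UserRepository repo = mock(UserRepository.class);

        // Tell Mockito what to return when the method is called.
        when(repo.findNameById(42)).thenReturn("Alice");

        assertEquals("Alice", repo.findNameById(42));
    }
}
```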

A spy is another test double used by Mockito. Unlike a mock, a spy requires a non-mocked instance to "spy on". The spy "delegates all method calls to the real object and records what method was called and with what parameters". It works exactly as you would expect a spy to: it spies on the instance. In general, mocks are more useful than spies because they require no real implementation of the dependency, while a spy does. Needing a spy can also be an indicator that a class is doing too much, "thus violating the single responsibility principle" of clean code.
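Here is a quick, hypothetical sketch of a spy wrapping a real object (the code is mine, not the article's):

```java
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class SpyExampleTest {
    @Test
    void spyDelegatesToRealObjectAndRecordsCalls() {
        // A spy needs a real, non-mocked instance to wrap.
        List<String> spy = spy(new ArrayList<String>());

        // The call is delegated to the real ArrayList...
        spy.add("hello");

        // ...and Mockito recorded that add() was called with "hello".
        verify(spy).add("hello");
    }
}
```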

Mocking with Mockito can be used when you want to test that an object receives a specific value when a method is called on it with certain parameters. A mock does not require the implementation of the methods to be written yet, because you can assign return values for method calls through Mockito. This is why Mockito is such a popular testing tool and mocking such a popular testing strategy, and why I'll continue to use it whenever relevant.
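As a rough illustration of checking that a method was called with certain parameters, here is a sketch where the EmailService collaborator and every name are hypothetical:

```java
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator; the names are illustrative only.
interface EmailService {
    void send(String address, String body);
}

class EmailServiceVerifyTest {
    @Test
    void sendsConfirmationWithExpectedParameters() {
        EmailService emails = mock(EmailService.class);

        // The code under test would call emails.send(...); we simulate it here.
        emails.send("user@example.com", "Order confirmed");

        // Verify the method was called with the expected arguments.
        verify(emails).send(eq("user@example.com"), contains("confirmed"));
    }
}
```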

Link to original article:



Boundary Value Testing

For this week's blog post, I wanted to cover a topic from earlier this semester: boundary value testing. I found an article on guru99, seemingly my go-to site for articles for this blog. The article has several sections, including a brief definition of what boundary testing is, a definition of equivalence class partitioning, and the relationship between the two. There are also a couple of examples and an analysis section that explains why we should use equivalence and boundary testing.

The article describes boundary testing as "the process of testing between extreme ends of boundaries between partitions of the input values." These extreme values include the lower and upper bounds, the minimum and maximum acceptable values for a given variable. Boundary testing requires more than just the minimum and maximum values, though: for the actual testing, you also need a value just above the minimum, a nominal value somewhere in the middle of the range, and a value just below the maximum.

Equivalence class partitioning is explained by the article as "a black box technique (code is not visible to tester) which can be applied to all levels of testing." It also states that with equivalence class partitioning, "you divide the set of test condition into a partition that can be considered the same". This may sound a bit confusing, but it is actually pretty simple. The article uses the example of a ticket system where values 1-10 are the only acceptable values and values 11-99 are invalid. You could break this set of numbers into two equivalence class partitions and then, returning to the boundary testing concept, test values like 0, 1, 5, 9, and 10.
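To make the ticket example concrete, here is a minimal sketch of those boundary tests, assuming a hypothetical isValidTicketCount method implementing the 1-10 specification:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class TicketValidator {
    // Hypothetical specification: only 1 through 10 tickets are acceptable.
    static boolean isValidTicketCount(int tickets) {
        return tickets >= 1 && tickets <= 10;
    }
}

class TicketBoundaryTest {
    @Test
    void boundaryValues() {
        assertFalse(TicketValidator.isValidTicketCount(0));  // just below minimum
        assertTrue(TicketValidator.isValidTicketCount(1));   // minimum
        assertTrue(TicketValidator.isValidTicketCount(5));   // nominal
        assertTrue(TicketValidator.isValidTicketCount(9));   // just below maximum
        assertTrue(TicketValidator.isValidTicketCount(10));  // maximum
        assertFalse(TicketValidator.isValidTicketCount(11)); // just above maximum
    }
}
```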

The best part about boundary testing is that, because it is a form of black box testing, you do not need to see how the actual code works. You only need to know the specification for the valid and invalid values. Another reason to use boundary testing is that it eliminates the need to test every value and thus minimizes the number of tests needed. I have used this testing strategy and will continue to do so throughout my software engineering career.

Link to original article:


Mutation Testing

For today's blog post I wanted to cover mutation testing. This is a topic we went over in class, but I felt I may have missed something due to the short window of time we had to review it. I found an article on guru99, a site that is becoming a go-to for me, specifically about mutation testing. The article has several sections, including a general overview of what mutation testing is, the different types of mutation testing, and its advantages and disadvantages.

The article states that mutation testing "is a type of white box testing which is mainly used for unit testing." White box testing implies that the code is visible and known to the tester, or in this case, the person writing the mutation tests. The article states that the goal of mutation testing is to "assess the quality of the unit tests which should be robust enough to fail mutant code." This raises the question: what is this mutant code? The mutant code is a slight alteration to the original, intended code, such as changing a conditional > to a <. One of the most important things to note is that when performing mutation testing, only one alteration should exist per test case. In my example of changing a > to a <, this would be the only change made in the entire code being tested.
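As a sketch of the idea (my own illustration, not the article's code), here is an original condition, a comment describing its single-operator mutant, and a test strong enough to kill that mutant:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class Discount {
    // Original code: orders over 100 qualify for a discount.
    static boolean qualifies(double total) {
        return total > 100;
    }
    // A mutant would flip the operator, e.g. "total < 100".
    // Per the rule above, that would be the only change in the mutated program.
}

class DiscountTest {
    @Test
    void killsTheRelationalMutant() {
        // Asserting a value on each side of the boundary means the mutant
        // fails while the original passes, so the test "kills" the mutant.
        assertTrue(Discount.qualifies(150));
        assertFalse(Discount.qualifies(50));
    }
}
```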

The article states that mutation testing is advantageous because "it is a powerful approach to attain high coverage of the source program" and has the "capability to detect all the faults in the program." Mutation testing finds instances where a test should be failing but is not. One of the biggest drawbacks of mutation testing is that it is nearly impossible to maintain without an automation tool, as stated in the article. The number of mutations required for full coverage is not realistic to attain by hand.

I could see myself using mutation testing in the future as a way of attaining full test coverage. It is definitely a powerful testing approach when it can feasibly be implemented.


Link to original article:

Testing & Agile

For this week's blog post, I decided to find a post on a topic that we haven't explicitly covered in class. I found an article on Agile testing, and my interest in Agile as a whole drew me in. The article, on QASymphony, breaks down what the Agile methodology is, gives some examples of Agile testing, and explains how to align testing with the Agile delivery process. For the sake of this post, I will assume you're familiar with the general concept of the Agile methodology for software development and understand the basics of Scrum and Kanban.

One of the interesting testing strategies for Agile development is Behavior Driven Development (BDD) testing. BDD tests are similar to, and essentially a replacement for, the Test Driven Development (TDD) style of testing used in traditional waterfall development cycles. Instead of unit tests written before the code, BDD tests sit at a much higher level and are written much like user stories. Development is based on end-user behavior, and the tests need to be readable by people who might not be particularly technical, since they can often replace requirements documentation. This saves time in the long run because there is no duplication of effort for those who could not read unit tests in the traditional TDD style. The best part about BDD testing is that the tests do not necessarily need to be written by technical team members. They can be, and often are, written with input from business partners, scrum masters, and product owners who might not be able to contribute to unit tests in the TDD format. Like TDD, this style still allows for testing small snippets of functionality, so that Agile-friendly aspect of TDD remains intact in BDD.
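For flavor, here is a hypothetical sketch of what a BDD-style scenario might look like using Cucumber's Java bindings. The feature text and step names are my own invention, not from the article:

```java
// A Gherkin scenario like this is readable by non-technical team members:
//
//   Scenario: Returning customer logs in
//     Given a registered user "alice"
//     When she logs in with a valid password
//     Then she sees her account dashboard

import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class LoginSteps {
    private boolean loggedIn;

    @Given("a registered user {string}")
    public void aRegisteredUser(String name) {
        // Set up the user in a test fixture (omitted for brevity).
    }

    @When("she logs in with a valid password")
    public void sheLogsIn() {
        loggedIn = true; // Real code would drive the application here.
    }

    @Then("she sees her account dashboard")
    public void sheSeesDashboard() {
        assertTrue(loggedIn);
    }
}
```

The plain-English scenario is the artifact that product owners and scrum masters can read and help write; only the step definitions underneath require a developer.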

After reading through the other testing strategies and how to align testing with the Agile methodology, I realized that I have already worked in systems like this at my summer internships. We ran our testing in a format pretty much identical to BDD, where everyone on the team contributed to the different testing obstacles and even came up with test cases that code written afterwards would need to pass. And as the article says, it was common for not all of the tests to pass immediately; this is part of the "failing fast" concept that Agile is all about. I am glad I found this article, as it gave me more insight into how businesses around the world, including the ones I interned at, are converting to Agile and adopting these testing strategies if they haven't already.

Link to original article:


Exploring Decision Tables for Software Testing

For this week's blog post, I decided to choose a subject that would give me more experience with a black box testing technique, that is, developing tests for software without direct access to the code of the application. This way, the tests are created with a bigger focus on the business logic rather than on particular barriers that may arise within a programming language or the application's overall stack. I found an article that covers the usefulness of decision tables in software testing. The article discusses how to use decision tables as well as why they are such an important tool for black box testing, and it mentions other black box testing techniques as well.

The author, Venktesh Somapalli, provides an example of a financial application where there are two possible conditions for a user: repayment within a term, or moving on to the next term of a loan. Somapalli goes through the steps a tester might take when constructing a decision table for this particular application. The decision table technique is based on conditions being either true or false, so all outcomes are absolute. This means there should be only one outcome for any given set of conditions. The reverse, however, is not true: more than one set of conditions can lead to a particular outcome. In the article's example, if both conditions are true or both are false, an error message is the outcome. On the other hand, there is only one set of conditions that processes money, and one that processes the loan term.
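Here is a minimal sketch of how that two-condition table might translate into code; the method and outcome names are hypothetical, not Somapalli's:

```java
// Decision table with two boolean conditions:
//
//   repaymentMade | nextTerm | outcome
//   --------------+----------+-------------------
//   true          | true     | error message
//   true          | false    | process repayment
//   false         | true     | process loan term
//   false         | false    | error message
class LoanDecision {
    static String decide(boolean repaymentMade, boolean nextTerm) {
        if (repaymentMade == nextTerm) {
            return "error";  // both true or both false
        }
        return repaymentMade ? "process repayment" : "process loan term";
    }
}
```

One test case per row of the table then exercises the entire business rule.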

My favorite part of this testing technique is that it is inherently non-technical, due to the nature of black box testing and the simplicity of the concept. This means that non-technical people within a business can understand and even develop test cases using this technique. As Somapalli points out, the technique is also versatile because it can be applied to any set of business logic. He also notes that decision tables are iterative, meaning that if new conditions are added to the logic, the existing table can be revised and reused without a complete reconstruction. I definitely agree with his arguments for the usefulness of this technique. I have used it in the past, and before reading this article I hadn't fully appreciated how powerful and versatile this strategy is. I will definitely continue to make use of it when developing test cases throughout my career.

Link to the original article:

Testing in Try/Catch Blocks

For today's blog post, I wanted to cover a specific topic that I don't particularly enjoy. While that may sound counterintuitive, I feel that it is important to repeatedly work on skills, even the ones that aren't your favorite to practice. So for today's post, I found an article about using try/catch blocks in testing. I don't particularly enjoy try/catch blocks, even though they are incredibly useful and necessary. The author explains multiple scenarios that may occur with try/catch blocks and how to ensure false positives are avoided. She dives into each scenario, using pseudocode to illustrate what a method may look like, and examines how tests must be constructed to check the proper criteria.

One of her examples, the scenario where a test should pass only if an exception is not thrown, is probably my favorite. She explains that the try/catch block isn't really needed in this scenario: if we are testing that an exception is not thrown, code without a try/catch block fails in exactly the same cases as code with one. That's one less scenario where I'll have to use the dreaded try/catch block.
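In JUnit terms, a minimal sketch (mine, not the author's pseudocode) shows why the try/catch is unnecessary here:

```java
import org.junit.jupiter.api.Test;

class NoExceptionTest {
    @Test
    void passesOnlyIfNoExceptionIsThrown() {
        // No try/catch needed: if parseInt() throws, JUnit
        // reports the exception and the test fails on its own.
        Integer.parseInt("42");
    }
}
```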

In my opinion, the most confusing of the four scenarios is writing a test that should pass only if an exception is thrown. The author explains that to make the test work properly, you place a marker in the try block that will execute if and only if the code that is supposed to throw the exception doesn't throw it. When this try/catch block runs, the only time the marker executes is when the test should be failing. The author explains this in a way that is clear and easy to follow, even for those of us who dislike try/catch blocks.
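Here is a minimal JUnit sketch of that marker pattern, my own illustration of the idea rather than the author's exact code:

```java
import static org.junit.jupiter.api.Assertions.fail;

import org.junit.jupiter.api.Test;

class ExpectedExceptionTest {
    @Test
    void passesOnlyIfExceptionIsThrown() {
        try {
            Integer.parseInt("not a number");
            // The marker: this line runs only if no exception was thrown,
            // which is exactly when the test should fail.
            fail("Expected a NumberFormatException");
        } catch (NumberFormatException expected) {
            // Expected path: the exception was thrown, so the test passes.
        }
    }
}
```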

In all honesty, I'm glad I chose this article. I try (no pun intended) to avoid using try/catch blocks as much as I can, as they can be confusing to me at times. But like I said at the beginning of this post, it is extremely important to practice skills, and possibly even more important to practice the ones you don't particularly like. I will definitely come back to this article the next time I'm forced to incorporate try/catch blocks in a program I am writing.

Link to the original article:

TS>JS (For Scaling)

Before this class, I had heard of Angular and had always heard it referred to as a JavaScript framework. I guess technically it is, because it still compiles to JavaScript, but the framework itself is written in TypeScript, a superset of JavaScript. I had never heard of TypeScript before this class, and the idea of a superset language was also a new concept to me. I was curious why Angular was written in TypeScript rather than JavaScript or another superset. I know from outside sources that the first iteration of Angular was written in JavaScript, but every iteration since has been written in TypeScript, and I wanted to know why. This post by Victor Savkin lays out the benefits of TypeScript as a way of explaining why Google chose it for Angular.

Victor clears up my confusion about what a superset is fairly simply. Because TypeScript is a superset of JavaScript, everything you can do in JavaScript, you can do in TypeScript. With small changes, a ".js" file can be renamed to ".ts" and compile properly.

Victor emphasizes the lack of scalability in vanilla JavaScript. With JavaScript, large projects are much more difficult to manage for many reasons, including weak typing and the lack of interfaces. TypeScript solves these problems and, through its syntax, helps create better-organized code that is much easier to read than the equivalent JavaScript.

The lack of interfaces in JavaScript is a major reason it doesn't scale well for large projects. Victor uses a JavaScript code example showing that aspects of a class can be lost because there are no explicitly typed interfaces. The same code is then shown in TypeScript and is much clearer because you can actually see the interface. It's that simple: in JavaScript you have to carefully track how every object interacts with the others, while TypeScript lets you declare these interfaces so that future maintenance is less stressful and less bug-ridden.

TypeScript also offers explicit type declarations, which the article illustrates with code samples. The JavaScript version is a bare function signature that gives no information about the types of the variables involved. In the TypeScript version, the types of the variables are declared explicitly, making the function much clearer.

There are other reasons in the article for why TypeScript was chosen and what makes it so great, but these are the ones that stuck out to me. I've written JavaScript projects before and wondered how some of these issues would affect larger projects, and now I know how they can be resolved. This information will prove useful for my final project and any TypeScript-based projects I take on in the future.

Original Post:


Revisiting Polymorphism

This week I found an article on JavaWorld by Jeff Friesen that is entirely about polymorphism. Polymorphism is a core concept in all software development, and a lack of strong understanding leads to weak applications and poorly written code. One can never revisit topics like polymorphism too much. The article discusses the four types of polymorphism, along with upcasting and late binding, abstraction, downcasting, and runtime type identification (RTTI). It presents everything in the form of code examples that it constantly revisits during the explanations, which I find incredibly useful.

The first part of the article is an overview of the four types of polymorphism: coercion, overloading, parametric, and subtype. Some bits of these stuck out to me more than others. For coercion, the obvious example is passing a subclass type to a method's superclass parameter; at compilation, the subclass object is treated as the superclass to restrict the available operations. The simple example that gets forgotten is performing math operations on different numeric data types like ints and floats, where the compiler converts the ints to floats for the same reason it converts the subclass to the superclass. Another piece of this first section that stood out to me was the overloading type of polymorphism. The obvious example here is overloading methods: in a class, you can have different method signatures for the same method name, and each will be called depending on the context. Again, there is a simple example that is overlooked, in operators like "+". This operator can add ints, add floats, or concatenate Strings depending on the types of the operands.
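A small sketch of those two easily forgotten examples, numeric coercion and the overloaded "+" operator, alongside ordinary method overloading (my own code, assuming nothing beyond standard Java):

```java
class PolymorphismBasics {
    // Overloading: same method name, different signatures.
    static int add(int a, int b) { return a + b; }
    static double add(double a, double b) { return a + b; }

    public static void main(String[] args) {
        int i = 3;
        double d = 0.5;

        // Coercion: the int is implicitly widened to double
        // so it can participate in floating-point addition.
        double sum = i + d;

        // The built-in "+" operator is itself overloaded:
        // arithmetic for numbers, concatenation for Strings.
        String s = "sum = " + sum;

        System.out.println(add(1, 2));     // calls the int version
        System.out.println(add(1.0, 2.0)); // calls the double version
        System.out.println(s);
    }
}
```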

The upcasting section uses our favorite example, different types of shapes, and is almost entirely written around one line of code: Shape s = new Circle(); This upcasts from Circle to Shape. After the upcast, the variable has no knowledge of the Circle-specific methods and only knows about the superclass Shape's methods. When a shared method (implemented in both Shape and Circle) is called, however, late binding kicks in and the Circle implementation is the one that runs, because the actual object is still a Circle. Related to this section on superclasses and subclasses is the section on abstraction. The article uses the Shape and Circle example again but points out that having a draw() method that actually does something in Shape makes no sense from a realistic standpoint. Abstraction resolves this: in an abstract class, a method like draw() can be declared without a body, and the abstract superclass Shape cannot be instantiated at all. This also means that every abstract superclass needs concrete subclasses to implement its abstract behaviors.
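A compact sketch of the upcasting and abstraction ideas, my own code in the spirit of the article's Shape/Circle example:

```java
abstract class Shape {
    // Abstract: Shape itself cannot be instantiated, and every
    // concrete subclass must provide its own draw().
    abstract void draw();
}

class Circle extends Shape {
    @Override
    void draw() { System.out.println("drawing a circle"); }

    void circleOnlyMethod() { /* Circle-specific behavior */ }
}

class UpcastDemo {
    public static void main(String[] args) {
        Shape s = new Circle();   // upcast: Circle treated as Shape
        s.draw();                 // late binding picks Circle's draw()
        // s.circleOnlyMethod();  // compile error: through a Shape reference,
        //                        // Circle-specific methods are unknown
    }
}
```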

These are just a few parts of the article that stood out to me personally. The article as a whole is incredibly useful and worthy of a bookmark from every developer. I will continue to revisit it whenever I find myself lacking confidence about polymorphism in the future!

Original Article:


DRY vs. WET Code

What is DRY code? I've heard the term in some tutorials but never knew the meaning behind the phrase. I figured if I keep seeing it, it must be important. After searching around the web for a bit, I finally found a great article on softwareyoga about what DRY/WET code is. The article explains what DRY/WET means, the advantages of DRY code, and the precautions to take when applying the principle. DRY means "Don't Repeat Yourself" and WET means "Write Every Time". Obviously, DRY is the one to aim for in most cases. The advantages the article walks through are maintainability, readability, reuse, cost, and testing.

DRY makes code more readable because logic is not repeated across multiple classes. Instead, methods/functions are called, and what would otherwise be hundreds of lines of code is represented by a few function calls. Readers of the code base can find the logic, read it once, and understand how it works. Then, whenever they see it used, they don't need to double-check that the logic is the same. If code is WET, the logic is rewritten in full each time, forcing readers to verify that nothing differs between implementations.
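As a tiny hypothetical sketch, here is the same rule written WET and then DRY; the tax rule and all names are invented for illustration:

```java
class Pricing {
    // WET: the same tax logic is copied everywhere it is needed,
    // so every copy must be read and verified separately.
    double invoiceTotalWet(double net) { return net + net * 0.0625; }
    double receiptTotalWet(double net) { return net + net * 0.0625; }

    // DRY: the logic lives in one place and is called by name.
    private static final double TAX_RATE = 0.0625;

    double withTax(double net) { return net + net * TAX_RATE; }

    double invoiceTotal(double net) { return withTax(net); }
    double receiptTotal(double net) { return withTax(net); }
}
```

A change to the tax rule now happens in one method instead of in every copy.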

Reuse and cost go hand in hand, since reuse lowers the cost of development. They also tie into maintainability: if code can be reused, small changes to the logic are less likely to cause problems, and less time is spent refactoring in the future when earlier implementations were written with DRY in mind. All of this means fewer developers and less time are needed to maintain a long-lasting codebase.

Testing is simple: if the source code is written DRY, writing the tests DRY only makes sense. It also makes tests easier to write and easier to change in the future.

The cautions the article expresses about DRY are rather self-explanatory. One of them is not to over-DRY your code. Basically, this means using DRY only when it makes sense: if the code doesn't benefit from what DRY offers, it likely doesn't need to be written with DRY in mind.

DRY is a great principle to follow when writing code. It teaches good practice and applies to most of the design patterns we are learning, many of which build on each other precisely to keep code DRY. Implementations are written without rewriting logic, requiring only simple method calls instead. I hope to keep my code DRY in the future; if I do, I will find it much easier in the long run both to add to my previous projects and to learn from them.

Here is the link to the original article:

What makes frameworks so cool?

This week, I decided to tackle the idea of frameworks. I have personally messed with Bootstrap, Spring, and Node/Express, but even with some experience tinkering in these frameworks, I still did not quite understand why they are such a required skill for developing in their respective languages. I chose this article because everywhere you look in the software development world, everything is about the latest framework, whether in blog posts, tech articles, or, most importantly, job postings. Everyone is expected to know a major framework for whatever language is listed as a required skill. The article I found on InfoWorld tackles what makes frameworks so powerful and why they are the foundation of the future of software development.

Probably the biggest point the article makes is that syntax does not really matter anymore. One of the secondary points backing this up is the idea that architecture should be the focus, rather than the minute details of a language's syntax. The focus should be on how to use existing libraries/frameworks by reading the documentation and figuring out the little details as you go. Personally, when I first started writing code, I focused excessively on the syntax of Java instead of understanding data structures themselves. This is a good example because most of the data structures we use in practice come from Java's Collections framework, and a strong understanding of that framework has helped me write better code more efficiently.

Another secondary point the article makes to back up the idea that syntax is dying is the growing area of visual languages. This was completely new to me, as I would not really have considered visual languages part of the software development process. Yet it is hard to ignore the growth of products like SquareSpace and Wix, and of tools like AndroidBuilder. While Wix and SquareSpace are not exactly what the article is referring to, I feel it is important to consider these tools when discussing visual languages: they alleviate the need for developers for small business owners who only need simple websites or web applications. I'm not too familiar with AndroidBuilder, but from the article I gather it is more of a tool for a developer to manipulate. I agree with the article that while visual languages will continue to grow, they will never replace the traditional means of creating applications. They do, however, diminish some of the need to learn nitty-gritty syntax.

These are just a couple of the seven reasons the author gives for why frameworks are becoming the new programming languages. I hope to apply the core ideas of this article when tackling frameworks, including Angular, which we will be working with shortly. Considering my minimal experience writing JavaScript applications, this will be necessary if I want my final project to be successful. Hopefully I can translate my new knowledge into productivity.


Here is the original article: