LazyProgrammers: Chapter 4

The Ugly


            The last chapter covered multiple ways that “laziness” in programming conforms to its traditional definition of work avoidance.  We examined, in detail, practices that “muddy” the code base and litter it with technical debt.  This chapter discusses the ramifications of those techniques on a large program over time.  In my career, I have been called in multiple times to rescue failing programs from the verge of cancellation.  After digging into the problems, the root causes always turned out to be the same: poor design, poor practices and poor attitudes.  Miraculously, after you clean up the first two, the third usually improves on its own!  In this chapter we will examine three “ugly” consequences of the “Bad Lazy” practices we discussed in the previous chapter.

  1. Destroying Architectural Cohesion.

Avoiding code smells, avoiding technical debt and avoiding architecture all have serious consequences for your code base.  Over time, your cohesive software system will degenerate into a “Big Ball of Mud”[1].  As defined by the coiners of the term, Brian Foote and Joseph Yoder, a big ball of mud is “a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair.”

We have known about the dangers of goto statements, spaghetti code and software entropy for many years.  Beginning with Dijkstra’s famous letter to the editor of the Communications of the ACM, “Go To Statement Considered Harmful”[2], there have been numerous warnings against the entropic slide of software as it ages.  The complexity of the software increases, band-aids get put atop older band-aids, and more programmers shrug their shoulders at all the broken windows.

But is this the fate of all software?  No, as Figure 3 contrasts.  It is important to understand the difference between these two diagrams and how Lazy programmers contribute to the latter case.

Figure 3  Destroying Architectural Cohesion

Figure 3 has two diagrams, the first demonstrating a system with strong architectural cohesion and the second demonstrating how a system loses that cohesion and degenerates into a big ball of mud.  Let’s begin with the top diagram where a system has strong architectural cohesion between its components.  What does this mean? 

It means the components know where they fit into the overall system and they “stay within their lane”.  A well-behaved component does its function efficiently, effectively and with good test coverage.  A well-behaved component does not shift its burden to other components and gracefully handles both peak loads and error conditions (like network latency, storage failures, out-of-memory errors and other exceptions).  A well-behaved component has “clean lines” and carries out its task in the simplest manner possible.  Going back to our definition of a big ball of mud, we see the phrase “expedient repair” as a chief culprit in the degeneration of the software.  The rules of a well-behaved component are frequently violated in the name of “expediency”.  Schedule pressure is the “go to” excuse of the Lazy Programmer.  The next favorite excuse is “efficiency”, but by efficiency they mean their own efficiency, not the code’s efficiency.  In other words, they adopt labor-saving techniques for their personal benefit and not necessarily for the project’s benefit.  Every expedient solution to a problem leads directly to a shortcut, which should be thought of as a cut on the architecture.  A cut between the lines.  A cut into the “design muscles” of the system.  And the system staggers, which leads to more emergency repairs, which leads to more expediency and more cuts.  This is the meaning of the phrase “death by a thousand cuts”.
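To make this concrete, below is a minimal sketch of a well-behaved component.  The names here (ThumbnailService, ImageStore) are hypothetical and not part of this book’s example code; the sketch simply illustrates a component that owns one job, hides its failure handling behind a narrow interface, and never shifts its burden onto its callers.

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.util.Optional;

// Hypothetical collaborator, hidden behind an interface so it can be swapped or mocked.
interface ImageStore {
    BufferedImage load(String imageId) throws IOException;
}

// A well-behaved component: one responsibility, a narrow interface, and its own
// error handling, so callers never inherit its failure modes.
public class ThumbnailService {
    private final ImageStore store;

    public ThumbnailService(ImageStore store) {
        this.store = store;
    }

    /** Returns a thumbnail, or Optional.empty() if the image cannot be loaded. */
    public Optional<BufferedImage> thumbnail(String imageId, int maxSize) {
        try {
            BufferedImage original = store.load(imageId);  // may be slow or may fail
            return Optional.of(scale(original, maxSize));
        } catch (IOException e) {
            // Handle the error condition here instead of leaking a raw exception
            // (or a null) into components that have no idea what to do with it.
            return Optional.empty();
        }
    }

    private BufferedImage scale(BufferedImage src, int maxSize) {
        int longest = Math.max(src.getWidth(), src.getHeight());
        int w = Math.max(1, src.getWidth() * maxSize / longest);
        int h = Math.max(1, src.getHeight() * maxSize / longest);
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }
}

Contrast that with the expedient shortcut that reaches directly into another component’s internals or lets every exception escape to a caller that cannot handle it; each such shortcut is one of the cuts described above.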

Of course, this slow degeneration of a system can take years.  The only way to fix it is to have the courage to clean up the technical debt and to ruthlessly refactor those rogue components.  How do you know if your system is moving in the right or wrong direction?  By monitoring and measuring whether it regresses when adding new features.

  2. Slipping into System Regression.

A software system evolves via a continuous stream of enhancements to satisfy user requirements.  Each “careless cut” placed in the system by a lazy programmer weakens that part of the system.  For example, lazy comments make it harder for another programmer to understand the code.  Lazy logging makes it harder to diagnose errors.  Sloppy resource utilization stresses multiple parts of the system simultaneously, which causes cascading errors.  When symptoms cascade into other symptoms, they become far removed from the root cause of the problem.  And as the root cause festers, parts of the system that worked previously can suddenly fail (or collapse from strain, as in Figure 4).
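As a small illustration of those last two cuts, consider the hypothetical sketch below (the class and file names are invented for this example, not taken from the book’s code).  The first method leaks a file handle and logs a message no one can diagnose; the second uses try-with-resources and logs the root cause with its context.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ConfigLoader {
    private static final Logger LOG = Logger.getLogger(ConfigLoader.class.getName());

    // The careless cut: the reader is never closed (a leaked file handle that will
    // stress another part of the system later), and the log line tells the next
    // programmer nothing about which file failed or why.
    public String loadCarelessly(Path path) {
        try {
            BufferedReader reader = Files.newBufferedReader(path, StandardCharsets.UTF_8);
            return reader.readLine();
        } catch (IOException e) {
            LOG.warning("error");   // lazy logging
            return null;
        }
    }

    // The disciplined version: try-with-resources guarantees the handle is released,
    // and the log message carries the failing file and the underlying exception.
    public String loadCarefully(Path path) {
        try (BufferedReader reader = Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
            return reader.readLine();
        } catch (IOException e) {
            LOG.log(Level.WARNING, "Failed to read config file " + path, e);
            return null;
        }
    }
}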

 

Figure 4  System Regression Analogy[3]

A system regression typically follows the proverbial “one step forward, two steps back”.  In concrete terms, the software system regresses when a feature added to one part of the system breaks another part of the system.  Now, let’s cover how to prevent this from happening through robust, automated regression tests.

            Regression testing can be performed manually (via a functional tester) or in an automated fashion via unit tests for the back end and user-interface robots for the front end.  In the context of Lazy programming, we are only concerned with the creation of unit tests by software developers as part of their normal development process.  If programmers are most concerned about saving time or finishing as soon as possible (possibly due to schedule pressure), then testing can be an afterthought.  In contrast, proponents of Test-Driven Development (TDD) recommend writing the tests first.  The real danger in writing unit tests is to treat them like a checkbox that just needs to be ticked for each new class or method that you create.  This “checkbox approach” is only concerned with finishing and not at all concerned with how to write tests well.  So, let’s focus the next example on how to test so that you cover edge cases and corner cases.  An edge case focuses on one variable at the extreme end of a boundary (aka “the edge”).  A corner case focuses on more than one variable (in the same way that a corner is the intersection of two boundaries), as the short sketch below illustrates.
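Here is a minimal sketch of that distinction before we turn to the image example.  The method under test, inBounds(), is hypothetical and exists only for illustration; the edge test pins one coordinate to its boundary, while the corner test pins both coordinates to their boundaries at once.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

public class GridBoundsTest {

    // Hypothetical method under test: valid coordinates run from 0..width-1 and 0..height-1.
    static boolean inBounds(int x, int y, int width, int height) {
        return x >= 0 && x < width && y >= 0 && y < height;
    }

    @Test
    public void edgeCase_oneVariableAtTheBoundary() {
        // Only x sits on the boundary; y is safely in the middle.
        assertTrue(inBounds(9, 5, 10, 10));    // last valid column
        assertFalse(inBounds(10, 5, 10, 10));  // one past the edge
    }

    @Test
    public void cornerCase_twoVariablesAtTheirBoundaries() {
        // Both x and y sit on their boundaries at the same time.
        assertTrue(inBounds(9, 9, 10, 10));    // lower-right corner
        assertFalse(inBounds(10, 10, 10, 10)); // just outside that corner
    }
}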

            Listing 8 sets the stage for our unit tests by offering a simple but common case in need of good testing: validating the input parameters to a method.  It is a simple program that loads an image into a frame, with a method to generate a sub-image given a rectangle.  (Note: since this example is long, it is condensed here and the full source code listing is in Appendix 2).

Listing 8. ImageTool

package us.daconta.lazy.programmers.examples;

// … imports removed for brevity, see Appendix 2

/**
 * @author mdaconta
 */
public class ImageTool extends JFrame
{
    private BufferedImage image;

    public ImageTool(String fullpath) throws IOException
    {
        this(new ImageIcon(fullpath));
    }

    // … Constructors and helper methods removed for brevity – see Appendix 2
    // The *key method* we want to test carefully is below …

    public BufferedImage getSubImage(Rectangle r)
    {
        BufferedImage subImage = null;
        if (image != null)
        {
            Rectangle imgRect = new Rectangle(0, 0, image.getWidth(), image.getHeight());
            if (r != null && imgRect.contains(r))
            {
                subImage = image.getSubimage(r.x, r.y, r.width, r.height);
            }
        }
        return subImage;
    }

    public static void main(String[] args)
    {
        try
        {
            ImageTool imgFrame = new ImageTool(args[0]);
            imgFrame.setVisible(true);
            int x = Integer.parseInt(args[1]);
            int y = Integer.parseInt(args[2]);
            int width = Integer.parseInt(args[3]);
            int height = Integer.parseInt(args[4]);
            BufferedImage subImage = imgFrame.getSubImage(new Rectangle(x, y, width, height));
            ImageTool subImageFrame = new ImageTool(subImage);
            subImageFrame.setVisible(true);
        } catch (Throwable t)
        {
            t.printStackTrace();
        }
    }
}

To write good unit tests for the ImageTool class, you need to write tests for each non-trivial method (unlike getters, setters and other methods with no input parameters).  For this example, let’s focus on just one method, getSubImage(), which takes a single parameter, a Rectangle, that represents the section of the original image you want to “extract” into a new image.  The way to think about unit testing is to focus on five different types of test cases: the common case, the boundary case, the existence case, the empty case and other data-type-specific variants.  The bottom line is that writing good tests is hard.  You must think about each of these cases and come up with a test for each variant.
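As a minimal sketch of those five categories, consider the hypothetical method below, firstWord(), which is invented purely for illustration; each test method names the category it covers.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import org.junit.jupiter.api.Test;

public class FirstWordTest {

    // Hypothetical method under test: returns the first whitespace-delimited word
    // of a string, or null when there is nothing sensible to return.
    static String firstWord(String s) {
        if (s == null) {
            return null;            // nothing to do for a missing input
        }
        String trimmed = s.trim();
        if (trimmed.isEmpty()) {
            return null;            // nothing to do for an empty input
        }
        return trimmed.split("\\s+")[0];
    }

    @Test
    public void commonCase() {
        assertEquals("lazy", firstWord("lazy programmers"));
    }

    @Test
    public void boundaryCase() {            // the smallest meaningful input: one single-character word
        assertEquals("x", firstWord("x"));
    }

    @Test
    public void existenceCase() {           // the input does not exist at all
        assertNull(firstWord(null));
    }

    @Test
    public void emptyCase() {               // the input exists but is empty
        assertNull(firstWord(""));
    }

    @Test
    public void dataTypeSpecificCase() {    // a String-specific variant: leading blanks and a tab separator
        assertEquals("tab", firstWord("  tab\tseparated"));
    }
}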

            For our unit test, we need to think about all those cases in relation to the input data (a rectangle), the source data and the resulting sub-image.  While this example is simple, it offers some insight into the more general problem of checking that a smaller entity is contained within a larger one.  It also represents an easy way to visualize edge cases and corner cases.  Listing 9 presents eleven tests for the getSubImage() method.

Listing 9. ImageToolTest

package us.daconta.lazy.programmers.examples;

// imports removed for brevity

/**
 * JUnit tests for ImageTool class.
 * @author mdaconta
 */
public class ImageToolTest {

    private boolean comparePixels(BufferedImage image1, BufferedImage image2) {
        int width = image1.getWidth();
        int height = image1.getHeight();
        int width2 = image2.getWidth();
        int height2 = image2.getHeight();
        if (width != width2 || height != height2) {
            return false;
        }
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                if (image1.getRGB(col, row) != image2.getRGB(col, row))
                {
                    return false;
                }
            }
        }
        return true;
    }

    /**
     * Test of getSubImage method, of class ImageTool.
     */
    @org.junit.jupiter.api.Test
    public void testGetSubImage() throws IOException {
        System.out.println("Testing getSubImage");
        ImageTool instance = new ImageTool(
            "C:\\Users\\mdaconta\\Documents\\paycheck-programmer.jpg");
        BufferedImage image = instance.getImage();

        System.out.println("Test #1: null");
        Rectangle r = null;
        BufferedImage expResult = null;
        BufferedImage result = instance.getSubImage(r);
        assertEquals(expResult, result);

        /* Note: these would all be separately viewed and validated against a known
           image for correctness or, even better, generated programmatically in a
           standard, synthetic test pattern. */
        BufferedImage upperEdge = image.getSubimage(0, 0, image.getWidth(), 1);
        BufferedImage lowerEdge = image.getSubimage(0, image.getHeight() - 1, image.getWidth(), 1);
        BufferedImage rightEdge = image.getSubimage(image.getWidth() - 1, 0, 1, image.getHeight());
        BufferedImage leftEdge = image.getSubimage(0, 0, 1, image.getHeight());
        BufferedImage upperLeftCorner = image.getSubimage(0, 0, 1, 1);
        BufferedImage upperRightCorner = image.getSubimage(image.getWidth() - 1, 0, 1, 1);
        BufferedImage lowerLeftCorner = image.getSubimage(0, image.getHeight() - 1, 1, 1);
        BufferedImage lowerRightCorner = image.getSubimage(image.getWidth() - 1, image.getHeight() - 1, 1, 1);

        System.out.println("Test #2: zeroes");
        r = new Rectangle(0, 0, 0, 0);
        expResult = null;
        result = instance.getSubImage(r);
        assertEquals(expResult, result);

        System.out.println("Test #3: negative");
        r = new Rectangle(-1, 1, 1, 1);
        expResult = null;
        result = instance.getSubImage(r);
        assertEquals(expResult, result);

        // upper edge
        System.out.println("Test #4: upper edge");
        r = new Rectangle(0, 0, image.getWidth(), 1);
        int expectedWidth = image.getWidth();
        int expectedHeight = 1;
        result = instance.getSubImage(r);
        assertResults(result, upperEdge, expectedWidth, expectedHeight);

        // … Tests 5 – 10 are eliminated here for brevity, see Appendix 2

        // LR corner
        System.out.println("Test #11: LR corner");
        r = new Rectangle(image.getWidth() - 1, image.getHeight() - 1, 1, 1);
        expectedWidth = 1;
        expectedHeight = 1;
        result = instance.getSubImage(r);
        assertResults(result, lowerRightCorner, expectedWidth, expectedHeight);
    }

    private void assertResults(BufferedImage result, BufferedImage expectedResult,
                               int expectedWidth, int expectedHeight)
    {
        assertNotNull(result);
        assertEquals(result.getWidth(), expectedWidth);
        assertEquals(result.getHeight(), expectedHeight);
        assertTrue(comparePixels(result, expectedResult));
    }
}

As you can see in Listing 9, we use eleven tests to provide proper test coverage of the existence test (null), the numeric tests (zero and negative), the edge cases and the corner cases.  Another important reason to test edge cases is to prevent “off-by-one” errors, which most often occur at the edges.  Finally, you should take note of the various helper methods (i.e., comparePixels() and assertResults()) that improve the tests.  To complete the example, Listing 10 shows the Maven run of the test results.

Listing 10. Maven Run of JUnit Tests

--- maven-surefire-plugin:2.12.4:test (default-test) @ lazy-programmers-examples ---
Surefire report directory: C:\Users\mdaconta\Documents\NetBeansProjects\lazy-programmers-examples\target\surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running us.daconta.lazy.programmers.examples.ImageToolTest
Testing getSubImage
Test #1: null
Test #2: zeroes
Test #3: negative
Test #4: upper edge
Test #5: lower edge
Test #6: left edge
Test #7: right edge
Test #8: UL corner
Test #9: UR corner
Test #10: LL corner
Test #11: LR corner
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.307 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

As we stated before, edge cases and corner cases are hard!  Lazy programmers are typically only concerned about coverage, not the quality of their tests.  Checking boxes gives your team and the program at large a false sense of confidence.
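The off-by-one errors mentioned above are a good illustration of why quality matters more than coverage: a “common case” test happily checks the box while hiding the bug, and only a genuine edge test exposes it.  Below is a minimal, hypothetical sketch of that failure mode; sumRowBuggy() and sumRow() are invented for illustration and are not part of the book’s example code.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class RowSumTest {

    // A classic off-by-one: "col <= width" reads one element past the right edge
    // and throws ArrayIndexOutOfBoundsException on any full-width row.
    static int sumRowBuggy(int[] row, int width) {
        int sum = 0;
        for (int col = 0; col <= width; col++) {
            sum += row[col];
        }
        return sum;
    }

    // The corrected loop stops at the last valid index, width - 1.
    static int sumRow(int[] row, int width) {
        int sum = 0;
        for (int col = 0; col < width; col++) {
            sum += row[col];
        }
        return sum;
    }

    @Test
    public void rightEdgeCatchesTheOffByOne() {
        int[] row = {1, 2, 3};
        // A test that only used a partial width would never notice the bug;
        // exercising the full width fails against sumRowBuggy() with an
        // ArrayIndexOutOfBoundsException and passes against the corrected sumRow().
        assertEquals(6, sumRow(row, row.length));
    }
}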

            One final piece of advice regarding unit testing: whenever a bug is found in the software, you should immediately write a unit test that both demonstrates the bug and proves that, once it is fixed, it stays fixed forever.  Of course, it stays fixed because your unit test will check for it in every new iteration of the software.
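For instance, such a test names the bug it guards against and reproduces the exact report.  The sketch below is hypothetical; PriceFormatter and ticket BUG-1234 are invented purely for illustration.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class PriceFormatterRegressionTest {

    // Hypothetical class under test, included only to keep the sketch self-contained
    // (shown here after the fix for the bug described below).
    static class PriceFormatter {
        static String format(double amount) {
            String sign = amount < 0 ? "-" : "";
            return String.format("%s$%.2f", sign, Math.abs(amount));
        }
    }

    // Hypothetical bug report: ticket BUG-1234 claimed negative amounts were rendered
    // as "$-5.00" instead of "-$5.00".  This test was written first to reproduce the
    // report; once the fix landed it passes, and it keeps the bug from quietly
    // returning in any future iteration.
    @Test
    public void bug1234_signAppearsBeforeTheCurrencySymbol() {
        assertEquals("-$5.00", PriceFormatter.format(-5.00));
    }
}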

 

  3. Most Lazy programmers are paycheck programmers.

At what point does a programmer’s desire to find an easier way become an excuse for avoiding the hard tasks (like design)?  At what point does automating drudgery blind us to the tasks that are worth drudging through (like tedious edge cases in unit tests)?

I have seen the mantle of “good laziness” worn as a badge of honor to conceal a basic lack of discipline: as if development should be easy, as if development should be fast, as if the programmer cannot bear the weight of a time-consuming task.  While the result is an expedient solution, the motivation for that expediency is simply the paycheck.  In other words, the project’s objectives are secondary to the paycheck.  The mission of the project, or the mission that the project supports, is tangential to “just another day at the factory”.

 

Figure 5  A Paycheck Programmer[4]

Figure 5 depicts a paycheck programmer who always chooses the pay over the mission.  Of course, this is not often a conscious choice, and that is actually the most dangerous aspect of this weakness.  Choosing expediency over quality as a “default setting” is part and parcel of the human condition.  It is a perennial temptation that we all must deal with.  As a military veteran, I have always operated under a strong sense of mission at the expense of personal comfort.  The military works hard at instilling this characteristic into the profession.  That, of course, raises the question of what a profession is, what a professional is, and how your definition of “laziness” affects both.

A profession[5] is an occupation that requires specialized skills and practices, an established body of knowledge, and a community of practitioners to enforce the standards and ethics in the field.  A profession usually emerges over time as a specialized trade craft matures.  It takes time for a group of practitioners to understand their craft, codify its best practices and then promulgate those practices to its adherents.  Software development, also known as Software Engineering in some circles, has been evolving and maturing since the inception of widely adopted “high-level”[6] programming languages in the 1950s.  Given that starting point, the trade craft of building computer programs has had roughly seventy years to mature.  Many practitioners, including myself, believe that our body of knowledge, practices and ethics have matured to the point where we are ready to move from a craft to a profession.  This concerns every software developer and raises the question: “Are we ready to become Software Engineering Professionals?”  Are you ready?  Should we be ready, as a group of practitioners, to take the next step?

In 2011, Marc Andreessen penned an op-ed in the Wall Street Journal entitled “Why Software Is Eating the World”[7].  The article highlighted a new reality in the economics of business: all companies have become increasingly reliant on the software that runs their business.  For example, car companies are becoming software companies because so many of a car’s components are controlled by software.  Many industries are being disrupted by software: publishing, telecommunications, retail, banking, movies and cable.  What does this mean for software developers?  The software you create affects more and more people, which raises the urgency of our industry transforming from a trade craft into a profession.  The practitioners of a profession are expected to act like professionals at all times.

Professionalism is a defined code of conduct followed by the practitioners of a profession.  For software engineers, the best-known code of ethics and conduct is the one developed by the Association for Computing Machinery (ACM) and reprinted as Appendix 1 (in accordance with their permission statement).  Simply stated, I believe that laziness as defined and demonstrated in Chapter 3 violates section 2.1 of our Professional Responsibilities, which states: “Strive to achieve high quality in both the processes and products of professional work.”  So, to put this bluntly, lazy programmers lack professionalism.

Team leaders need to be keenly aware of this.  My plea to those who lead software teams is this: “Don’t turn your team into paycheck programmers by promulgating ‘laziness as a virtue’”.  Instead, stress that you have to be willing to struggle through difficult things.  Struggle with design. Struggle with the architectural issues.  Struggle through troubleshooting and debugging.  The hard reality that we accept is this: quality code is not supposed to be easy.

In the end, supporting the mission of our end users is the most important thing.  Crafting code that is useful.  Getting the code into the hands of the users to better their lives.  Code that performs the purpose it was designed for and performs it well.  As professionals, we take pride in our performance, our software products, and their role in shaping the future.



[1]“Big Ball of Mud”, paper by Brian Foote and Joseph Yoder, 1997. https://joeyoder.com/PDFs/mud.pdf

[2]“Go To Statement Considered Harmful”, Edsger W. Dijkstra, Communications of the ACM, 1968.

[3]Image from Bishnu Sarangi on Pixabay.com and free for commercial use.  https://pixabay.com/photos/bridge-collapse-damage-312873/

[4]Image by mohamed_hassan from Pixabay and free for commercial use. https://pixabay.com/illustrations/risk-money-cliff-chasing-run-4423433/

[5]https://en.wikipedia.org/wiki/Profession

[6]https://en.wikipedia.org/wiki/History_of_programming_languages

[7]https://a16z.com/2011/08/20/why-software-is-eating-the-world/