# TODO: write a test for this question.

1 Answer

Chip was born into the programming world in the golden era of TDD and was thus presented with write-the-test-first design patterns in many tutorials and guides, a practice that chafed at him from early on.

Chip's main production app, filterbuildscheduler, has nearly complete system, model and job tests. At time of writing: 1,073 passing tests.

On the other end of the spectrum, Chip has only 462 tests in his other production app, liters_tracker. The codebase is smaller, but so is the test coverage. The main part of the application, a public-facing stats and story viewer, has robust test coverage. The back-end data-entry and reporting side, which is only touched occasionally by three people, is lacking in some coverage.

Instead of writing tests before implementing code, Chip has found it much more comfortable and natural to inject a final review-everything-and-write-tests step into his development-deployment cycle. He has found that performing a final inspection that includes writing tests, cleaning up, fine tuning, and refactoring always leads him to discovering some hidden mistakes, conflicting ideas, unused methods, slow code, etc.

Chip focuses on writing model method tests, system tests, and tests for jobs or services. He rarely writes view and controller tests. Overall, he believes your testing practices (or even having tests at all) must prove themselves by adding practical value to the quality of the process and end result; otherwise, why bother?

It's worth noting: Chip recognizes that owning the entire code base as a solo dev lends itself to various flexibilities that teams working together might not enjoy. First and foremost, Chip is a team player and will set aside his own testing philosophies to adopt the standards and preferences of the team and company.

If your interest is piqued, read on for a more nuanced delve into Chip's thinking on this matter.

While TDD seems to Chip like a good moral practice, akin to six-month dental checkups, it often ran afoul of how he approaches software development.
The practical use case for TDD is a piece of straightforward, encapsulated code. When you know exactly what you are about to do, writing a test for it first is very achievable.

I need a new method: Post#has_answers? 
1. Write a test:
  A. When a Post is not a question, it should return false
  B. When a Post is a question,
    1. And it has one or more associated answers, it should return true
2. Write the method
3. Ensure the test passes
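As a minimal sketch of steps 1 through 3, using plain Ruby objects standing in for the ActiveRecord models (the `post_type` attribute and `answers` collection are assumptions based on the outline):

```ruby
# Plain-Ruby sketch of the outline above; Post's attributes are
# assumptions standing in for the real ActiveRecord model.
class Post
  attr_reader :post_type, :answers

  def initialize(post_type:, answers: [])
    @post_type = post_type
    @answers = answers
  end

  def question?
    post_type == :question
  end

  # Step 2: the method the test describes.
  def has_answers?
    question? && answers.any?
  end
end

# Step 1/3: the assertions, written first, now passing.
raise "case A failed" unless Post.new(post_type: :article).has_answers? == false
raise "case B1 failed" unless Post.new(post_type: :question, answers: [:a]).has_answers? == true
```

The method is small and fully specified up front, which is exactly the situation where test-first feels natural.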

But most of the time, Chip starts with a broader scope: "I want to retrieve, store, occasionally update and present my StackOverflow reputation and badges on this site."
When starting on this feature, there are a few paths Chip could take and he usually likes to start down one of those paths and see what he finds:

1. Retrieve reputation and badge info from SO profile
  A. Does SO have an open API?
  B. Does SO offer a JSON representation of a profile?
  C. If not A or B, we might have to get and parse the HTML of the SO profile

Starting with a specific test at this point is impossible. After doing some research, C turned out to be the answer.

We're still not at the level of specificity required of a good test. So Chip starts toying with ideas and trying some strategies. He eventually settles on using Net::HTTP and Nokogiri in an ApplicationJob, but once he gets into the flow of tinkering and building, he rarely stops moving forward to go write failing tests first.

Which leads us to Chip's preferences (from his upcoming memoir):
  1. Don't test what Rails (or Ruby or a gem) does for us.
  2. Test behaviors, not implementation.
  3. Write tests for what I struggled to get working in the first place.
  4. Write most tests when I think the app is ready for first deployment.
  5. Write tests for everything I add after the app is deployed.
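To illustrate preference 2 with a toy example (all names here are hypothetical): assert on what the caller observes, not on which internal helpers produce it.

```ruby
# Toy example: the public contract is to_fahrenheit's return value.
class TemperatureConverter
  def to_fahrenheit(celsius)
    scaled(celsius) + 32
  end

  private

  # Internal detail; a behavior-focused test never references this,
  # so it can be renamed or inlined without breaking the suite.
  def scaled(celsius)
    celsius * 9.0 / 5.0
  end
end

# Behavior test: input in, expected output out.
raise unless TemperatureConverter.new.to_fahrenheit(100) == 212.0
```

A test that stubbed or asserted calls to `scaled` would pass or fail based on refactoring choices rather than on whether the conversion is correct.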

Hometown sweetheart Jason Swett uses car repair analogies in lots of his articles on Rails testing, so let me follow suit.

Once a car rolls off the assembly line, when the company thinks it's complete but before it goes to the consumer, it goes through a rigorous safety and function test.

Chip finds himself naturally following these general phases for app development:

  1. During scaffolding phases: some model tests, checking for constraints Chip added
  2. During build-out phases: hardly ever, Chip is trying various strategies and tinkering
  3. During detailed tuning and completion of specific features: write tests for features that feel duct-taped together or in need of tuning or refactoring, to make sure they at least meet the objective
  4. During final sprints: nope, just get the code into the files
  5. Before initial production deploy: yes, in fact, it's worth adding a whole new step: 6.
  6. App review: go through every model and test every method. Does it do what I expect? Did my thinking change as the app grew and matured? Is this method still being used? Write a system test to ensure every route actually loads a page and that any critical interactivity on that page is tested (e.g., forms submit successfully and a record is created or changed). Write mailer and job tests to cover all the activity of the app.
  7. Deployment: add and set up a continuous integration system.
  8. Features / improvements / bug fixes after deployment: yes, if any method, view, job, or mailer is added or changed.
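For step 7, the CI setup can start very small. A hypothetical minimal configuration, assuming GitHub Actions and an RSpec suite (Ruby version and commands would need adapting to the actual app):

```yaml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.2"
          bundler-cache: true   # runs bundle install and caches gems
      - run: bundle exec rspec
```

From then on, preference 5 becomes cheap to honor: every push runs the full suite before anything reaches production.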