Professional insights and DIY projects from 34 years in software engineering.
Software Development Engineer III at AWS • Eclipse Foundation Contributor • 3 Patents • Pacific Northwest
Building a DIY propane pool heating system to heat a 4,400-gallon above-ground pool from 68°F to 85°F using a tankless water heater and submersible pump.
Details about the Intex above-ground pool setup, including padding, dimensions, and skimmer installation for the DIY propane heating project.
Comprehensive shopping list for building a DIY propane pool heater, including tankless water heater, submersible pump, hoses, filters, and all necessary components with Amazon links.
Step-by-step instructions for assembling and testing the DIY propane pool heater system, including water connections, propane setup, and safety testing procedures.
Daily operation procedures and maintenance schedule for the DIY propane pool heater system, including startup/shutdown procedures and troubleshooting tips.
Unexpected wildlife visitors to our DIY pool heater setup, including raccoon encounters that led to equipment damage and important lessons about protecting outdoor equipment.
I see human code reviews as one tool in the quality toolbox. My opinion is that to keep code reviews interesting and engaging, humans should be the last link in the chain and get the most interesting problems. If the review is burdened with pointing out that an opened resource was never closed, or that a specific path through the code can never be reached, code reviews become draining and boring.

I also believe that code reviews need to scale up to teams that are not co-located. That might mean using an asynchronous process, like a workflow system, or using collaboration tools to run the review through teleconferences and screen sharing. A workflow system can prevent code from being promoted into the mainline build until one or more reviewers have accepted it.

To keep code reviews interesting and challenging, I give the grunt work to the machines and run static analysis and profiling tools first. Before the humans get involved, the code needs to pass the suite of static analysis tests at the prescribed level. This weeds out the typical mistakes that go beyond what a compiler catches. There are many analysis and profiling tools available, both open source and commercial. Most of my development work is in server-side Java, and my analysis tools of choice are FindBugs, PMD and the profiling tool in Rational Software Architect. FindBugs is a byte code analyzer, so it looks at what the Java compiler produces and is less concerned with the form of the source code. PMD analyzes source code. Both tools have configurable thresholds for problem severity, and both accept custom problem patterns. PMD ships with a large library of patterns, including checks for overly complex or long functions and methods. The RSA profiling tool measures timing only down to the method level of classes, but it can quickly help a developer focus on where the sluggish parts of a system are hiding, which is valuable information going into a review.

Once the code makes it through this array of automated tests, bring the humans in to look at it and get their input. In our case, I have found this approach changes the review from a potentially adversarial situation into one with an educational tone. The review meeting, if it happens synchronously, is not overtaken by small problems and the pointing out of basic mistakes. It is concerned with making recommendations at a higher level to improve the larger design.

FindBugs, U. of Maryland, http://findbugs.sourceforge.net/
PMD, SourceForge, http://pmd.sourceforge.net/
Rational Software Architect for WebSphere Software, http://www-01.ibm.com/software/awdtools/swarchitect/websphere/
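To make the division of labor concrete, here is a minimal sketch of the kind of defect I would rather a machine catch than a human reviewer: the unclosed resource mentioned above. Analyzers in this class flag the leaky pattern automatically; the class and method names here are invented for illustration.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {

    // Leaky version: if readLine() throws, close() is never reached.
    // Spotting this is mechanical grunt work for a static analyzer.
    static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine(); // an exception here leaks the reader
        reader.close();
        return line;
    }

    // Fixed version: try-with-resources closes the reader on every path.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}

With the mechanical check automated, the human reviewer is free to ask the more interesting question of whether the file should be read at that point in the design at all.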
Here are links to my official AWS certification records.
AWS Certified Developer - Associate
AWS Certified Solutions Architect - Associate
Eventually, this unique ID will be indexed by the various search engines on the Internet. My dog is not lost, but I see this as a form of insurance in case he ever does go on walkabout without my permission.
The proximity sensor problem with iPhone 4 is a topic of much debate on discussion boards, blogs and news sites. The proximity sensor is used by the phone to determine whether the user is holding the phone to her ear during a call, and the phone uses its input to decide whether to activate the screen and allow touch input. Many owners of the phone have reported the screen re-enabling while holding the phone to their ear during a call, while others have reported no problems. I am one of the unfortunate owners who has inadvertently placed a caller on hold or interrupted other callers with touch tones emanating from my end of the call. As of today I am on my second iPhone 4 and disappointed to report that my experience has not improved.

There are plenty of emotional calls for Apple to quickly address this problem. I want to take a different approach. In this essay, I will discuss testing approaches and what they mean for complex systems, using the proximity sensor as a real-world example to demonstrate the problem many have experienced and the difficulty involved in testing for it.

Inside iPhone is a complex hardware system arranged in a hierarchy of command and control: a microprocessor, memory, storage, and transceivers for wi-fi, cellular and bluetooth networks. It has touch, light, sound and proximity sensor input. It has external interfaces for the dock, a headset and the SIM card. It has a single display integrated with the touch sensor input. The software distributed through these components is a system of collaborating state machines, each one working continuously to keep the outside world pleased with the experience of interfacing with the phone. And it is not just a single human the iPhone must keep satisfied: the cellular networks, wi-fi access points, bluetooth devices, iTunes and other external systems are part of this interactive picture as well. This is oversimplified, but you can begin to appreciate the enormous burden of testing such a small, complex device used by millions of people. How does a team even start to tackle such a problem?

Meyer (2008) presents seven principles covering the planning, creation, execution, analysis and assessment of a testing regimen. Meyer writes that the reason for the testing process, above and beyond any other, "is to uncover faults by triggering failures." The more failures that are triggered and fixed before a product is delivered to the end user, the less expensive they are than failures fixed later. Humans are a required yet flawed variable in the planning and execution of test suites for complex systems like iPhone, and identifying all possible triggers for failure can be nearly impossible. Savor (2008) argues that "The number of invariants to consider [in test design] is typically beyond the comprehension of a human for a practical system." How do we test the multitude of scenarios and their variations in complex systems without fully comprehending usage patterns and subtle timing requirements for failure in advance? Meyer (2008) argues that testing time can be a more important criterion than the absolute number of tests. When time is combined with random testing (which relates to the idea of test escapes, defined next), there is a possibility of uncovering more faults than just using a huge, fixed suite of tests continuously repeated without deviation.
Test escapes, as defined by Chernak (2001), are defects that the fixed testing suite was not able to find, but that were instead found later by chance, by an unassociated test, or by an end user after the project was delivered to production. Deliberately introducing randomness is one way to hunt for them before release. Now that we have some background information and terminology, let's design a test that could make iPhone's proximity sensor fail to behave correctly. Consider an obvious test case for the proximity sensor:

1. Initiate or accept a call.
2. Hold the phone against ear. Expect the screen to turn off and disable touch input.
3. Hold the phone away from ear. Expect the screen to turn on and enable touch input.
4. End call.

This test case can be verified in a few seconds. Do you see a problem with it? It is a valid test, but not a terribly realistic one. It does not reflect what really happens during a call. We do not sit frozen with all of our joints locked into place, refusing to move until the call has completed. To improve the test case, we add some physical action during the call:

1. Initiate or accept a call.
2. Hold the phone against ear. Expect the screen to turn off and disable touch input.
3. Keep the phone still for 30 seconds.
4. Change rotation, angle and distance of phone to ear while never exceeding 0.25 inches from the side of the caller's head. Expect the screen to remain off and touch input to remain disabled.
5. Return to step 3 if call length is less than ten minutes.
6. Hold the phone away from ear. Expect the screen to turn on and enable touch input.
7. End call.

Now the test case reflects more reality, but there are still some problems with it. When I am on a call, I often transfer the phone between ears; holding a phone to the same ear for a long time gets uncomfortable. During lulls in the conversation, I pull the phone away from my ear to check the battery and signal levels, and then I bring it back to my ear. These two actions need to be added to the test case. Additionally, all of the timing in the test case is fixed. Because of the complex nature of the phone, small variations in timing anywhere can have an impact on successful completion of the test case. Introducing some variability may raise the chances of finding a failure. In other words, we will purposely create test escapes through random combinations of action and timing:

1. Initiate or accept a call.
2. Hold the phone against ear. Expect the screen to turn off and disable touch input.
3. Keep the phone still for [A] seconds.
4. Randomly choose step 5, 6 or 7:
5. Change rotation, angle and distance of phone to ear while never exceeding 0.25 inches from the side of the caller's head. Expect the screen to remain off and touch input to remain disabled.
6. Pull phone away from ear for [B] seconds and return phone to ear. Expect the screen to turn on and then off at the conclusion of the action.
7. Move phone to opposite ear. Do not exceed [C] seconds during the transfer. Expect the screen to turn on during the transfer and then off at the conclusion of the transfer.
8. Return to step 3 if call length is less than [D] minutes.
9. Hold the phone away from ear. Expect the screen to turn on and enable touch input.
10. End call.

There are four variables in this test case. It is possible that certain combinations of [A], [B], [C] and [D] will cause the screen to re-enable during a call and the test case to fail. Have fun with this one. There are in fact combinations that induce proximity failure on iPhone 4 regardless of the version of iOS, including 4.1.
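For teams automating this kind of schedule, here is a minimal sketch of a harness that could drive the randomized test case above. PhoneFixture is an assumed abstraction over whatever robot arm or simulator actually manipulates the phone; none of these names correspond to a real testing API.

import java.util.Random;
import java.util.concurrent.TimeUnit;

// Assumed abstraction over a robot arm or simulator; illustrative only.
interface PhoneFixture {
    void startCall();
    void endCall();
    void holdAtEar();
    void varyPositionWithinQuarterInch();    // step 5
    void pullAwayForSeconds(int seconds);    // step 6
    void transferToOtherEar(int maxSeconds); // step 7
    boolean screenIsOff();                   // the oracle for each check
}

class RandomizedProximityTest {

    static boolean run(PhoneFixture phone, long seed) throws InterruptedException {
        Random rng = new Random(seed); // log the seed so failures can be replayed
        int a = 1 + rng.nextInt(30);   // [A] seconds held still
        int b = 1 + rng.nextInt(10);   // [B] seconds away from ear
        int c = 1 + rng.nextInt(5);    // [C] max seconds for the ear transfer
        int d = 1 + rng.nextInt(10);   // [D] call length in minutes

        long deadline = System.nanoTime() + TimeUnit.MINUTES.toNanos(d);
        phone.startCall();
        phone.holdAtEar();
        try {
            while (System.nanoTime() - deadline < 0) {  // step 8 loop
                TimeUnit.SECONDS.sleep(a);              // step 3
                int action = rng.nextInt(3);            // step 4
                if (action == 0) {
                    phone.varyPositionWithinQuarterInch();
                } else if (action == 1) {
                    phone.pullAwayForSeconds(b);
                } else {
                    phone.transferToOtherEar(c);
                }
                if (!phone.screenIsOff()) {
                    return false; // screen re-enabled at the ear: the failure we hunt
                }
            }
        } finally {
            phone.endCall();
        }
        return true;
    }
}

Each run draws fresh values for [A] through [D], so many short seeded trials spend the testing time Meyer values on schedules no fixed suite would enumerate, and a failing seed reproduces the exact sequence for debugging.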
Finally, an important part of test design is the inclusion of negative test cases. Chernak (2001) writes, "A test case is negative if it exercises abnormal conditions by using either invalid data input or the wrong user action." For a device like iPhone, tapping the screen constantly while it is disabled, making a call while holding it upside down, or using a faulty docking cable can all be considered negative test cases.

Testing complex systems, regardless of physical size, is an incredibly difficult task. Some of this can be performed by humans and some through automated systems. Finding failures in highly integrated systems requires a combination of fixed test suites, test cases that reflect real usage scenarios, and the introduction of test escapes through creative randomization.

References

Chernak, Y. (2001). Validating and improving test case effectiveness. IEEE Software, January/February 2001.
Meyer, B. (2008). Seven principles of software testing. Computer, August 2008.
Savor, T. (2008). Testing feature-rich reactive systems. IEEE Software, July/August 2008.