<h2>The impact of the Pareto principle in optimization</h2>
<h3>Preface</h3>
<p>Although the <em>Pareto principle</em> is frequently mentioned in software optimization discussions, <a href="#References">[2]</a> the way this principle affects the optimization process is usually left obscure. Hence, I considered it worthwhile to devote this brief discussion to the impact of the <em>Pareto principle</em> on software optimization.
</p>
<h3>Basic theory and practice</h3>
<p>The <em>Pareto principle</em> <a href="#References">[1]</a> generally states that <em>roughly 80% of the effects come from 20% of the causes</em> and is hence also known as the <em>80-20 rule</em>. When applied to software optimization, <a href="#References">[2]</a> this principle suggests that 80% of the resources are typically used by 20% of the operations. More specifically, regarding the execution speed of a software entity, the <em>Pareto principle</em> suggests that 80% of the execution time is usually spent executing no more than 20% of the code.</p>
<p>The validity of the <em>Pareto principle</em> in software optimization is practically indisputable. Anyone with even limited experience in optimization will acknowledge that a very small percentage of the overall code is almost always responsible for most of the consumption of system resources. The <em>Pareto principle</em> applies so well in speed optimization that there are even cases in which almost 90% of the execution time is spent executing only 10% of the code (the <em>90-10 rule</em>). Furthermore, we will see in the next paragraphs that some well-known <em>optimization rules and practices</em>, which are frequently mentioned in related discussions, are in fact consequences of this important principle.
</p>
<p>However, it is important to clarify that the <em>Pareto principle</em> is essentially a widely accepted <em>rule of thumb</em>, <a href="#References">[1,9]</a> which is not expected to apply in exactly the same way or to the same degree in all situations. In the context of this article, the <em>Pareto principle</em> is considered to hold when <em>no more than 20%</em> of the operations consume <em>at least 80%</em> of the resources. For example, if 93% of the resources are used by 16% of the operations, this is still acknowledged as a situation in which the <em>Pareto principle</em> applies well. (Please note that there is no need for the percentages of the code and the resources to add up to 100%, since they are measures of different things.) Furthermore, sometimes a particular malfunction or performance bug may lead to a situation in which a couple of code lines consume almost all the system resources. Such cases are outside the scope of this discussion, which is mostly about the performance improvement of software entities that already function reasonably well.
</p>
<h3>The positive impact of the Pareto principle</h3>
<p>The obvious outcome of the <em>Pareto principle</em> is that not all parts of the implementation code in a typical software entity are equally responsible for the consumption of system resources; only a small portion (10%-20%) of the overall code is actually performance critical. However, the full power of this principle cannot be revealed unless we further study its three most notable consequences, which are frequently regarded as <em>optimization rules and practices</em>: <a href="#References">[5,6,7]</a> (see also <em>figure 1</em>)
</p>
<ul>
<li><strong>It is a good practice to <em>profile</em> <a href="#References">[3]</a> before optimizing.</strong> <a href="#References">[6]</a>
<p>According to the <em>Pareto principle</em>, most of the implementation code is usually almost irrelevant to the overall software performance, except for some small code portions (10%-20%) which consume most (80%-90%) of the system resources. Hence, it is very important to effectively locate these critical code portions and concentrate all our optimization efforts on them. Optimizing non-critical code is not only a waste of time, but will probably reduce the stability and maintainability of our product as well. Consequently, it is very beneficial to <em>profile</em> our code first and judiciously optimize only the code portions which have been proven performance critical. (A minimal measurement sketch is given below, after <em>figure 1</em>.)</p></li>
<li><strong>It is often preferable to start optimizing when the implementation is complete and functional.</strong> <a href="#References">[5]</a>
<p>It is actually much easier to make accurate performance measurements and effectively locate the performance bottlenecks <a href="#References">[4]</a> when the implementation is complete and functional. Thanks to the <em>Pareto principle</em>, the critical code is usually relatively small in size, hence a limited rewrite of the bottlenecks is not expected to cost as much as prematurely optimizing a much larger portion of code. This particular practice is also known as: <em>Make it work first, optimize later</em>.</p></li>
<li><strong>Well-designed code is usually much easier to optimize.</strong>
<p>Good software design helps us both locate the performance bottlenecks and improve small portions of code without affecting the rest of the program. On the other hand, poor software design will probably reduce the positive impact of the <em>Pareto principle</em>, by increasing the undesirable side-effects of the performance modifications, and will eventually make the optimization process disproportionately difficult in relation to the relatively small size of the critical code. In the words of Martin Fowler: <em>"Well-factored software is easier to tune"</em>. <a href="#References">[7]</a></p></li>
</ul>
<table align="center"><tbody>
<tr><td>
<img alt="Pareto consequences" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpaLMjpCQqbVs6V7xaNYde_ySEU9fL7pX_W1RP6InX6Ax4gV61P11Zw12VIZiqUKVQhBBjJPw7L8IzJ7DjVxNYKgtnRxRD32XroTS-6hmF-RjCvCuXCP0LCxwuiySHUQt8Ly_aqBHq2BY/s1600/Consequences.png"/>
</td></tr>
<tr><td align="center"><strong>Figure 1</strong>: Pareto consequences</td></tr>
</tbody></table>
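<p>As a side note, measuring does not need to be complicated. A real profiler (gprof, VTune, Instruments and the like) does the job properly, but even a crude, hand-rolled timer is better than guessing. The following C++ sketch is purely illustrative; the <em>ScopedTimer</em> name and the output format are my own:</p>
<pre><code>#include &lt;cstdio&gt;
#include &lt;ctime&gt;

// Minimal scoped timer: prints the CPU time spent in the enclosing scope.
// A real profiler samples the whole program instead; this only illustrates
// the principle of measuring before optimizing.
struct ScopedTimer {
    const char* label;
    std::clock_t start;

    explicit ScopedTimer(const char* l) : label(l), start(std::clock()) {}

    ~ScopedTimer() {
        double sec = double(std::clock() - start) / CLOCKS_PER_SEC;
        std::printf("%s: %.3f s\n", label, sec);
    }
};

void suspectedHotspot() {
    ScopedTimer timer("suspectedHotspot");  // measure first, optimize second
    // ... the work under suspicion ...
}
</code></pre>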
<p>The fact alone that the <em>Pareto principle</em> enables and supports the above fundamental <em>optimization rules and practices</em> is already quite remarkable. However, the overall impact of the <em>Pareto principle</em> goes far beyond these three consequences. Thanks to the validity of this principle, it is possible to design software solutions without having performance considerations and restrictions constantly in mind. It is also possible for software designers and developers to often favor <em>clarity</em>, <em>flexibility</em>, <em>simplicity</em>, <em>maintainability</em>, <em>reusability</em> and other important <em>qualities</em> over <em>performance</em> and <em>efficiency</em>. Consequently, thanks to this important principle, both the complexity and the cost of producing quality software have been significantly moderated.</p>
<p>Finally, last but not least, the <em>Pareto principle</em> and its consequences provide a very good defence against <em>premature optimization</em>, <a href="#References">[2,8]</a> a particularly bad habit of software developers who care a lot about performance. A good understanding of this principle virtually eliminates any temptation to optimize too early, or to unnecessarily optimize non-critical parts of code.
</p>
<h3>Misconceptions and restraining factors</h3>
<p>It is quite natural to expect that a principle as powerful and indisputable as the <em>Pareto principle</em> will cause some <em>exaggerations</em> and <em>misconceptions</em> along with its positive impact. More particularly, there are at least one common exaggeration and two common misconceptions which seem to be related to the <em>Pareto principle</em>:
</p>
<table border = "0"><tbody>
<tr>
<td><strong>Exaggeration:</strong></td>
<td>It is always easy to optimize a complete implementation.</td>
</tr>
<tr>
<td><strong>Misconception 1:</strong></td>
<td>There is no need at all to care about performance during the development.</td>
</tr>
<tr>
<td><strong>Misconception 2:</strong></td>
<td>Designing for performance is completely useless.</td>
</tr>
</tbody></table>
<p>The logical path which leads to the above false conceptions is quite short and simple (see also <em>figure 2</em>). Since the <em>Pareto principle</em> generally applies well in optimization, it is practically guaranteed that almost always a small percentage of the overall code (10%-20%) will be responsible for most of the system resources consumption. Because of the small size of the performance critical code, it is tempting to assume that it will always be convenient and effective to optimize a software entity after it has been fully implemented. This in turn leads to the false conclusion that there is no need at all to <em>care about performance</em> when implementing the software, and that <em>designing for performance</em> is completely useless!
</p>
<table align="center"><tbody>
<tr><td>
<img alt="Pareto misconceptions" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbR0wL5wQy6p3IO97VQo6TsAlSb4Nz-k6KUMGb2uEa08PidB-_9FI3J9ZosX9gf5sotUBXzIXkk9isCEs2D31pMnmTHd73iXGoK5Ao2usCgQPK6N-1zrU9DLnftbRXnKQ-sYfrQuTjsxI/s1600/Misconceptions.png"/>
</td></tr>
<tr><td align="center"><strong>Figure 2</strong>: Common Pareto exaggerations and misconceptions</td></tr>
</tbody></table>
<p>The major logical fault which enables the above inaccurate conceptions is the assumption that the optimization of a fully implemented software entity will always be easy and effective, just because the <em>Pareto principle</em> applies. In fact, the reality can be much more complicated. In a way, the impact of the <em>Pareto principle</em> in optimization resembles the impact of money in our lives. Money can help us improve many things in our lives, but cannot guarantee us a better overall quality of life. Likewise, although the <em>Pareto principle</em> greatly facilitates optimization at the end of the software development cycle, this is not the same as guaranteeing an easy and successful optimization process. It is important to understand that the <em>Pareto principle gives us no guarantees regarding the effort required for improving the software performance</em>; it just states that the performance critical code is relatively small in size and <em>nothing more</em>.
</p>
<p>But what can actually go wrong and make the optimization process really difficult, when we know for a fact that the performance critical code is small in size? Unfortunately, in practice there are several <em>restraining factors</em> which may reduce the positive impact of the <em>Pareto principle</em> and can make the optimization process disproportionately difficult in relation to the relatively small size of the critical code. To obtain a more concrete and practical understanding of these restraining factors, it can serve us well to discuss some indicative examples:
</p>
<ul>
<li><strong>The performance critical code already performs too well to be significantly improved.</strong>
<p>When the critical code already performs well, we should concentrate our efforts on reducing its use, instead of actually improving its performance. This sometimes requires difficult design changes or even architectural changes, which in turn greatly increase the complexity and the side-effects of the optimization. Consequently, when the critical code already performs well, the effort required for improving the software performance may not be proportional to the size of the critical code, and the positive impact of the <em>Pareto principle</em> is expected to be weakened.</p></li>
<li><strong>The performance critical code, even if it is small in size, may be widely scattered.</strong>
<p>When the critical code is distributed in many places, solving the performance problems will probably require a large number of changes, which may produce a lot of side-effects and increase instability. Consequently, the existence of scattered performance bottlenecks, which can be particularly frequent in poorly designed software entities, usually moderates the positive impact of the <em>Pareto principle</em>, by making the optimization disproportionately difficult in relation to the size of the critical code.</p></li>
<li><strong>The improvement of the performance critical code may cause side-effects in much larger code portions.</strong>
<p>The undesirable and dangerous side-effects which can be caused by performance improvements are the main reason why, as discussed above, well-designed code is easier to optimize. However, in the real world, software design is often imperfect and unexpectedly severe side-effects are not uncommon during optimization. Obviously, this is yet another case in which the optimization is disproportionately difficult in relation to the size of the critical code. Consequently, we should consider weak design as one more factor which can reduce the positive impact of the <em>Pareto principle</em>.</p></li>
<li><strong>The overall software performance is too poor to become acceptable by merely improving small portions of code.</strong>
<p>When the <em>80-20 rule</em> applies, it is mathematically impossible to make the overall performance more than <em>five times better</em> by merely improving the critical 20% of the code. Likewise, when the <em>90-10 rule</em> applies, the best we can possibly do by improving only the critical 10% of the code is to make the overall performance up to <em>ten times better</em>. (The arithmetic behind these bounds is made explicit right after this list.) These limitations may seem very relaxed at first glance, but in practice the improvements which can be easily achieved will usually be considerably smaller than the above theoretical best cases. Hence, if we neglect performance issues too much when building the software, it is quite possible to experience situations in which optimizing the most critical 10-20% of the code will not be enough to compensate for the overall poor performance. Of course, we can continue optimizing the less critical code as well, but this will probably force us to rewrite a big part of our already complete and functional software, which in turn increases the difficulty of the optimization process too much and will almost eliminate the positive impact of the <em>Pareto principle</em>. Speaking metaphorically, <em>if you have originally designed an elephant, it will be extremely difficult to optimize it into a cheetah at the final stages of its implementation!</em></p></li>
</ul>
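<p>The arithmetic behind these upper bounds deserves to be made explicit. If the critical code accounts for a fraction <em>p</em> of the total execution time and we make it <em>s</em> times faster, the overall speedup is 1 / ((1 - p) + p/s), which cannot exceed 1 / (1 - p) even for an infinitely large <em>s</em>. This gives the factor of five for the <em>80-20 rule</em> (p = 0.8) and the factor of ten for the <em>90-10 rule</em> (p = 0.9). A tiny C++ sketch of the calculation:</p>
<pre><code>#include &lt;cstdio&gt;

// Overall speedup when a fraction p of the execution time is made
// s times faster (essentially Amdahl's law).
double overallSpeedup(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main() {
    // 80-20 rule: a 10x faster hot spot yields only ~3.6x overall,
    // and even an infinitely fast hot spot caps the gain at 5x.
    std::printf("p=0.8, s=10:  %.2fx\n", overallSpeedup(0.8, 10.0));
    std::printf("p=0.8, limit: %.2fx\n", 1.0 / (1.0 - 0.8));
    // 90-10 rule: the corresponding ceiling is 10x.
    std::printf("p=0.9, limit: %.2fx\n", 1.0 / (1.0 - 0.9));
    return 0;
}
</code></pre>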
<p>The above examples clearly demonstrate that several <em>restraining factors</em> can prevent the <em>Pareto principle</em> from guaranteeing an easy and effective optimization of fully implemented software. In other words, it is not reasonable to assume that any software, regardless of its initial state, can easily achieve acceptable performance just because the <em>Pareto principle</em> applies. Since the <em>Pareto principle</em> cannot always guarantee an easy and successful optimization process, this principle is not a good enough reason for developers to stop <em>caring about performance</em> while building their software. Furthermore, in some performance critical applications, <em>designing for performance</em> may be the only effective way to achieve demanding performance goals.
</p>
<h3>Conclusion</h3>
<p>The <em>Pareto principle</em> has a great positive impact on software optimization, which software developers should take advantage of. However, like most things in life, the positive impact of the <em>Pareto principle</em> has its limits, which software developers should also acknowledge and respect. The <em>Pareto principle</em> is a very good reason for avoiding <em>premature optimization</em> and, at the same time, a really bad excuse for neglecting software performance during the design and implementation phases of software development. Generally, when it comes to optimization, overestimating this important principle can be at least as wrong as ignoring or underestimating it.
</p>
<p>Needless to say, a solid understanding of the <em>Pareto principle</em> is essential for software developers who deal with optimization tasks. Being able to take advantage of the positive consequences of the <em>Pareto principle</em>, while avoiding the dangerous misconceptions which surround it, is a very useful optimization skill, and one which improves significantly with experience and practice. However, it is not uncommon even for experienced developers to underestimate the limitations of the <em>Pareto principle</em>. Unfortunately, the fact that these limitations can be very inconvenient also makes them difficult to acknowledge, since software developers are often prone to underestimating and overlooking whatever seems inconvenient to them.
</p>
<a name="References"></a><h3>References</h3>
<ol>
<li>Wikipedia: Pareto principle<br/>
<a href="http://en.wikipedia.org/wiki/Pareto_principle">
http://en.wikipedia.org/wiki/Pareto_principle</a></li>
<li>Wikipedia: Program optimization<br/>
<a href="http://en.wikipedia.org/wiki/Optimization_(computer_science)">
http://en.wikipedia.org/wiki/Optimization_(computer_science)</a></li>
<li>Wikipedia: Profiling<br/>
<a href="http://en.wikipedia.org/wiki/Profiling_(computer_programming)">
http://en.wikipedia.org/wiki/Profiling_(computer_programming)</a></li>
<li>Wikipedia: Performance bottlenecks<br/>
<a href="http://en.wikipedia.org/wiki/Program_optimization#Bottlenecks">
http://en.wikipedia.org/wiki/Program_optimization#Bottlenecks</a></li>
<li>Optimize Later<br/>
<a href="http://c2.com/cgi/wiki?OptimizeLater">
http://c2.com/cgi/wiki?OptimizeLater</a></li>
<li>Profile Before Optimizing<br/>
<a href="http://c2.com/cgi/wiki?ProfileBeforeOptimizing">
http://c2.com/cgi/wiki?ProfileBeforeOptimizing</a></li>
<li>Tuning Performance and Process: Creating Tunable Software<br/>
<a href="http://www.artima.com/intv/tunableP.html">
http://www.artima.com/intv/tunableP.html</a></li>
<li>Premature Optimization<br/>
<a href="http://c2.com/cgi/wiki?PrematureOptimization">
http://c2.com/cgi/wiki?PrematureOptimization</a></li>
<li>Wikipedia: Rule of thumb<br/>
<a href="http://en.wikipedia.org/wiki/Rule_of_thumb">
http://en.wikipedia.org/wiki/Rule_of_thumb</a></li>
<li>My blog posts on Codeproject<br/>
<a href="http://www.codeproject.com/script/Articles/BlogArticleList.aspx?amid=54927" rel="tag">CodeProject</a></li>
</ol>
<h2>Singletons can be dangerous</h2>
<h3>A dangerous pattern</h3>
<p>Even shallow internet research quickly reveals many popular misconceptions regarding <em>singletons</em>, <a href="#Links">[A1,A2,A3]</a> which can easily mislead software developers into inappropriate use of this pattern. Since the <em>singleton</em> pattern has several disadvantages and side-effects, misusing it creates many more problems than it is supposed to solve. Hence, before starting to use <em>singletons</em> in your code, it is quite beneficial to first study some of the <a href="#Links">links</a> provided at the end of this text, in order to fully understand when it is appropriate to use this pattern and to be aware of the consequences of its usage. To give you a preview of what you are going to find out, I briefly present here some common misconceptions regarding <em>singletons</em>, along with a typical example of inappropriate <em>singleton</em> usage.
</p>
<h3>Common misconceptions</h3>
<ul>
<li>The authors of the notable <em>"Design Patterns"</em> book <a href="#Links">[A1]</a> recommend the use of <em>singletons</em>. (No, they don't; <a href="#Links">[B6]</a> they merely present the <em>singleton</em> pattern in their book.)</li>
<li>When only one instance of a particular object is needed, then it is a good idea to implement this object as a <em>singleton</em>. (Not a good enough reason. <a href="#Links">[A3]</a>)</li>
<li><em>Singletons</em> do not share the same problems that global variables have. (They actually share most of them. <a href="#Links">[A3]</a>)</li>
</ul>
<p>In case any of the above misconceptions does not seem obvious to you, you definitely need to do some more <a href="#Links">reading</a>, in order to become more familiar with the disadvantages and side-effects of the <em>singleton</em> pattern.
</p>
<h3>Example of inappropriate usage</h3>
<p>A typical example of an inappropriate <em>singleton</em> is one that is merely used by a single class or function. There are at least two good reasons for not using this pattern in such cases:
</p>
<ul>
<li><em>Singletons</em> are essentially globals. Hence, if only a single class or function is using a particular <em>singleton</em>, then this <em>singleton</em> object has a much larger scope and lifetime than its actual usage requires. This is obviously a bad practice, even by the standards of old-fashioned <em>structured programming</em>. In other words, it is not wise to use a global when a local variable is sufficient.</li>
<li><em>Singletons</em> have been designed to work well when <em>their potential creators, owners and clients are many in number and also hard to predict</em>. In such cases, the use of a <em>singleton</em> effectively eliminates most of the problems caused by the nondeterministic creation and ownership. On the other hand, if the creator, owner and client is just one single class or function, the use of a <em>singleton</em> is definitely overkill!</li>
</ul>
<p>Consequently, this particular <em>singleton</em> usage virtually minimizes the advantages of the pattern, while maximizing its drawbacks at the same time!
</p>
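<p>A minimal C++ sketch of this situation may be helpful; the <em>Mixer</em> and <em>AudioPlayer</em> names are of course made up for illustration:</p>
<pre><code>// Overkill: a singleton whose one and only client is AudioPlayer.
class Mixer {
public:
    static Mixer* instance() {              // global access point nobody else needs
        static Mixer* theMixer = new Mixer; // created on first use, never destroyed
        return theMixer;
    }
    void mix() { /* ... */ }
private:
    Mixer() {}                              // creation is locked down for no benefit
};

class AudioPlayer {
public:
    void play() { Mixer::instance()->mix(); }
};

// Simpler alternative: scope and lifetime match the actual usage.
class SimpleAudioPlayer {
public:
    void play() { mixer.mix(); }
private:
    struct PlainMixer { void mix() { /* ... */ } };
    PlainMixer mixer;                       // just a member, no global state
};
</code></pre>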
<a name="Links"></a><h3>Basic reading</h3>
<ul>
<li><strong>A1.</strong> <em>Design Patterns (book)</em><br>
<a href="http://en.wikipedia.org/wiki/Design_Patterns_(book)">
http://en.wikipedia.org/wiki/Design_Patterns_(book)</a></li>
<li><strong>A2.</strong> <em>Wikipedia: Singleton pattern.</em><br>
<a href="http://en.wikipedia.org/wiki/Singleton_pattern">
http://en.wikipedia.org/wiki/Singleton_pattern</a></li>
<li><strong>A3.</strong> <em>Singleton Design Pattern.</em><br>
<a href="http://sourcemaking.com/design_patterns/singleton">
http://sourcemaking.com/design_patterns/singleton</a></li>
<li><strong>A4.</strong> <em>The One: A Singleton Discussion.</em><br>
<a href="http://www.gamedev.net/reference/articles/article1825.asp">
http://www.gamedev.net/reference/articles/article1825.asp</a></li>
</ul>
<h3>Further reading</h3>
<ul>
<li><strong>B1.</strong> <em>GPWiki: Singleton pattern.</em><br>
<a href="http://gpwiki.org/index.php/Singleton_pattern">
http://gpwiki.org/index.php/Singleton_pattern</a></li>
<li><strong>B2.</strong> <em>Performant Singletons.</em><br>
<a href="http://scientificninja.com/advice/performant-singletons">
http://scientificninja.com/advice/performant-singletons</a></li>
<li><strong>B3.</strong> <em>Singletons are Pathological Liars.</em><br>
<a href="http://misko.hevery.com/2008/08/17/singletons-are-pathological-liars/">
http://misko.hevery.com/2008/08/17/singletons-are-pathological-liars/</a></li>
<li><strong>B4.</strong> <em>Where Have All the Singletons Gone?</em><br>
<a href="http://misko.hevery.com/2008/08/21/where-have-all-the-singletons-gone/">
http://misko.hevery.com/2008/08/21/where-have-all-the-singletons-gone/</a></li>
<li><strong>B5.</strong> <em>Root Cause of Singletons.</em><br>
<a href="http://misko.hevery.com/2008/08/25/root-cause-of-singletons/">
http://misko.hevery.com/2008/08/25/root-cause-of-singletons/</a></li>
<li><strong>B6.</strong> <em>Design Patterns 15 Years Later: An Interview with Erich Gamma, Richard Helm, and Ralph Johnson</em><br>
<a href="http://www.informit.com/articles/article.aspx?p=1404056">
http://www.informit.com/articles/article.aspx?p=1404056</a></li>
</ul>
<h2>Debugger limitations</h2>
<h3>Preface</h3>
<p><em>Source-level debuggers</em> <a href="#References">[1]</a> (hereafter also referred to simply as <em>debuggers</em>) are very useful and powerful tools, but just like any other tool, they can be misused, overused and overrated. In my experience, some software developers (mostly novice ones) tend to occasionally overestimate the capabilities of modern debuggers and neglect to consider alternatives, usually because they forget or even ignore the <em>limitations</em> of these powerful tools. Hence, I thought it a good idea to present here an indicative collection of debugger limitations, which I have discovered one by one during my own debugging experiences. Although I mostly work in C/C++, many of these limitations probably apply to most source-level debuggers, regardless of the programming language.
</p>
<h3>Limitations</h3>
<p>Modern debuggers <a href="#References">[1]</a> have proved to be extremely helpful tools and invaluable time savers for most developers. However, they still have several <em>limitations</em>, which make them ineffective in some rare but difficult debugging situations. <a href="#References">[2]</a> Most of these limitations are in fact <em>preconditions</em> that must hold in order to use a debugger successfully, like the ones included in the following non-exhaustive, but nevertheless indicative, list:
</p>
<ol>
<li>The problem should be reproducible in a fairly short amount of time.</li>
<li>The problem should not affect the functionality of the debugger itself.</li>
<li>The use of the debugger should not affect the behavior of the target problem.</li>
<li>The problem should be reproducible in a build configuration, which allows the debugger to work effectively.</li>
<li>The problem should be reproducible in a software environment, which allows the debugger to work effectively.</li>
<li>The hardware, in which the problem is reproducible, should be accessible by a local or remote debugger.</li>
<li>In some cases, the user of the defective program should be willing to help reproduce the problem, sometimes by investing a lot of time and effort.</li>
<li>In some cases, the user of the defective program should be willing to provide all the means and rights required for the remote debugging to function.</li>
</ol>
<p>Since the above list of preconditions might seem a bit too theoretical, I am also adding below some concrete examples of how exactly these preconditions might be violated:
</p>
<ul>
<li>Some years ago, I was hunting a very difficult bug, so rare that it clearly violated <em>precondition 1</em>. Furthermore, this problem manifested itself only when the program was wrapped inside the security shell of a well-known copy protection system. This shell implemented a variety of security measures, among them a strong anti-debugging defence mechanism, hence I also had to confront a violation of <em>precondition 5</em>. Last but not least, the result of the bug was a blue-screen situation in Windows XP, which obviously violated <em>precondition 2</em>. Needless to say, any attempt to use a debugger for investigating a bug like this is undoubtedly a pointless waste of time.</li>
<li>In general, when the problem causes the operating system itself to hang, crash or freeze, the debugger is unlikely to be very helpful. (Violation of <em>precondition 2</em>.) Fortunately, modern operating systems are very stable, hence these cases are rather rare nowadays.</li>
<li>When hunting bugs in timer events, other sensitive event-handling code, or frequently called drawing code, the interactive features of source-level debuggers (breakpoints, watchpoints, etc.) tend to become inconvenient and hard to use. Although the use of a debugger is feasible in most of these cases, using logging instead is usually much more practical and effective. (Conflict with <em>precondition 2</em>.)</li>
<li>When the defective program uses the computer display exclusively by itself (some DirectX applications do that), the use of a debugger can often be impractical, and in some rare situations even impossible. (Violation of <em>precondition 2</em>.) The situation can be improved a lot with the use of logging or remote debugging.</li>
<li>Developers who hunt transient performance bugs, or thread/process synchronization bugs, frequently find the presence of a source-level debugger and the use of breakpoints too intrusive. Ironically, when using a debugger to reveal such problems, we often end up hiding them even more and making them completely unreachable! (Violation of <em>precondition 3</em>.)</li>
</ul>
<p>Apart from the above preconditions, source-level debuggers also have, due to their interactive nature, some significant shortcomings in their information-gathering capabilities. Although debuggers are very good at representing the current sequence of nested function calls, also known as the <em>call stack</em>, they are not particularly good at representing the sequence of actions or states over the execution time. Consequently, when hunting bugs that depend on a previous sequence of actions or states, a simple logging facility can often be more effective than the most powerful debugger. However, some modern debuggers have recently been equipped with the <em>tracepoint</em> <a href="#References">[3,4]</a> feature, providing logging capabilities along with their excellent interactive facilities.
</p>
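<p>For the sake of concreteness, here is a minimal C++ sketch of the kind of logging facility I have in mind; the macro name, the log file name and the line format are arbitrary choices of mine:</p>
<pre><code>#include &lt;cstdio&gt;
#include &lt;ctime&gt;

// Minimal trace log: appends a timestamped line to a file. Unlike a
// breakpoint, it records the sequence of actions without pausing the
// program, so it also works in timers, drawing code, etc.
// (Opening the file on every call is slow but keeps the sketch simple.)
#define TRACE(msg)                                             \
    do {                                                       \
        std::FILE* f = std::fopen("trace.log", "a");           \
        if (f) {                                               \
            std::fprintf(f, "%ld %s:%d %s\n",                  \
                         (long)std::time(0),                   \
                         __FILE__, __LINE__, (msg));           \
            std::fclose(f);                                    \
        }                                                      \
    } while (0)

void onTimer() {
    TRACE("timer fired");  // safe even in sensitive event-handling code
    // ... handler body ...
}
</code></pre>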
<h3>Conclusion</h3>
<p>The list of limitations presented in this text indicates that, in spite of the indisputable fact that modern debuggers are extremely powerful and versatile, in some circumstances their effectiveness might be considerably constrained. More practically, <u>if you notice a conflict with the above list during debugging, then it might be a good idea to abandon your source-level debugger and consider using alternative debugging techniques.</u> This also shows that knowledge of alternative or complementary debugging techniques is still very useful.
</p>
<a name="References"></a><h3>References</h3>
<ol>
<li><em>Debugger.</em><br>
<a href="http://en.wikipedia.org/wiki/Debugger">
http://en.wikipedia.org/wiki/Debugger</a></li>
<li><em>Unusual software bugs.</em><br>
<a href="http://en.wikipedia.org/wiki/Unusual_software_bug">
http://en.wikipedia.org/wiki/Unusual_software_bug</a></li>
<li><em>VSD tracepoints.</em><br>
<a href="http://msdn.microsoft.com/en-us/library/ktf38f66.aspx">
http://msdn.microsoft.com/en-us/library/ktf38f66.aspx</a></li>
<li><em>GDB tracepoints.</em><br>
<a href="http://developer.apple.com/documentation/developertools/gdb/gdb/gdb_11.html">
http://developer.apple.com/documentation/developertools/gdb/gdb/gdb_11.html</a></li>
</ol>