<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Eyefodder &#187; Continuous Integration</title>
	<atom:link href="http://eyefodder.com/category/engineering/continuous-integration/feed" rel="self" type="application/rss+xml" />
	<link>http://eyefodder.com</link>
	<description></description>
	<lastBuildDate>Sat, 26 Aug 2017 21:12:17 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>Code Coverage — a simple Rails example</title>
		<link>http://eyefodder.com/2014/09/code-coverage-setup-rails.html</link>
		<comments>http://eyefodder.com/2014/09/code-coverage-setup-rails.html#comments</comments>
		<pubDate>Thu, 18 Sep 2014 13:13:40 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Agile Software Development]]></category>
		<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[Quality Software]]></category>
		<category><![CDATA[Rails]]></category>
		<category><![CDATA[Ruby]]></category>
		<category><![CDATA[Software Craftsmanship]]></category>
		<category><![CDATA[Test Driven Development]]></category>

		<guid isPermaLink="false">http://eyefodder.com/?p=208</guid>
		<description><![CDATA[<p>My tests are my safety net. With them I can refactor with confidence, knowing that I&#8217;m keeping the functionality I intended. With them, I can grow my codebase, knowing that I&#8217;m not introducing regression errors. How do I have confidence that my safety net is good enough? One metric I can use to help with this [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2014/09/code-coverage-setup-rails.html">Code Coverage — a simple Rails example</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>My tests are my safety net. With them I can refactor with confidence, knowing that I&#8217;m keeping the functionality I intended. With them, I can grow my codebase, knowing that I&#8217;m not introducing regression errors. How do I have confidence that my safety net is good enough? One metric I can use to help with this is code coverage. It answers the question “When I run my tests, how much of my application code executed?”. It&#8217;s a somewhat crude metric—telling me how broad the net is not how strong—but it’s a good place to start. Fortunately, setting it up on a rails project is pretty simple.</p>
<div id="attachment_217" style="width: 550px" class="wp-caption alignnone"><a href="http://upload.wikimedia.org/wikipedia/commons/3/35/Group_of_Circus_Performers_WDL10692.png"><img class="size-large wp-image-217" src="http://eyefodder.com/wp-content/uploads/2014/09/Group_of_Circus_Performers_WDL10692-1024x747.png" alt="circus perfomers in a safety net" width="540" height="393" /></a><p class="wp-caption-text">See how happy people are when they have a safety net?</p></div>
<p><span id="more-208"></span></p>
<h2>Getting Started</h2>
<p>I&#8217;ve made a simple example app that shows code coverage in action. Check out the source code from the <code>code_coverage</code> branch of my <a href="https://github.com/eyefodder/spex" target="_blank">spex</a> repository:</p>
<pre class="brush: bash; gutter: false; title: ; notranslate">

git clone -b code_coverage --single-branch https://github.com/eyefodder/spex.git

</pre>
<p>Now go to the <code>ops</code> directory and run <code>vagrant up</code> to get the virtual machine running. Next, let&#8217;s hop into the virtual machine and run the test suite:</p>
<pre class="brush: bash; gutter: false; title: ; notranslate">
vagrant ssh
...some output...
vagrant@spex:~$ cd /app
vagrant@spex:/app$ rspec

</pre>
<p>Now, check out the reports folder. You&#8217;ll see that there is a <code>coverage/rcov</code> folder. Open the index file in a browser and you&#8217;ll see an easy-to-digest code coverage report:<br />
<a href="http://eyefodder.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-17-at-10.08.41-AM.png"><img class="alignnone size-large wp-image-219" src="http://eyefodder.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-17-at-10.08.41-AM-1024x617.png" alt="code coverage report" width="540" height="325" /></a><br />
Pretty nifty huh? You can click on the rows in the table to see each class in more detail, and find out exactly which lines aren&#8217;t being executed:<br />
<a href="http://eyefodder.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-17-at-10.15.28-AM.png"><img class="alignnone size-large wp-image-221" src="http://eyefodder.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-17-at-10.15.28-AM-1024x826.png" alt="code coverage metrics for a single file" width="540" height="435" /></a><br />
Let&#8217;s take a look at how this was all set up&#8230;</p>
<h2>Code Coverage Gems</h2>
<p>First up, we need to add a couple of gems to the <code>Gemfile</code>:</p>
<pre class="brush: ruby; first-line: 59; title: ; notranslate">
  # code_coverage
  gem 'simplecov', :require =&gt; false
  gem 'simplecov-rcov', :require =&gt; false
</pre>
<p>Once we&#8217;ve run a <code>bundle install</code>, our next step is to configure our test suite to generate coverage reports.</p>
<h2>Configuring Code Coverage</h2>
<p>This, again, is a pretty simple affair. We need to launch SimpleCov before any application code has run, so we put this code at the top of <code>spec/spec_helper.rb</code>:</p>
<pre class="brush: ruby; title: ; notranslate">
if ENV['GENERATE_COVERAGE_REPORTS'] == 'true'
  require 'simplecov'
  require 'simplecov-rcov'
  SimpleCov.start 'rails' do
    coverage_dir ENV['CI_COVERAGE_REPORTS']
  end
  SimpleCov.formatter = SimpleCov::Formatter::RcovFormatter
end
</pre>
<p>There are a few things happening here. We have a couple of environment variables: <code>GENERATE_COVERAGE_REPORTS</code> tells us whether to create reports, and <code>CI_COVERAGE_REPORTS</code> tells us where to put them. If you&#8217;ve followed my earlier post on getting <a title="Getting Growl notifications from your Virtual Machine" href="http://eyefodder.com/2014/09/growl-guard-virtual-machine.html">Guard to send Growl</a> notifications, you will know to find these in <code>ops/dotfiles/guest_bash_profile</code>, a profile generated automatically when we launch the virtual machine with <code>vagrant up</code>. If not, well, now you do!<br />
The next thing you&#8217;ll notice is the <code>SimpleCov.start 'rails'</code> call on line 4. This configures SimpleCov to have a profile that is good for most Rails applications. For example, the <code>spec</code> and <code>config</code> folders are excluded from coverage stats. You can read more about profiles <a href="https://github.com/colszowka/simplecov#profiles">here</a>.<br />
Finally, we tell SimpleCov that we want to format our results with the <code>SimpleCov::Formatter::RcovFormatter</code>. When we get to running our build as part of a <a href="http://en.wikipedia.org/wiki/Continuous_integration">continuous integration</a> process with <a href="http://jenkins-ci.org/">Jenkins</a>, we can use this format to parse results to be viewed in the dashboard.</p>
<h2>Viewing Code Coverage Reports generated on a Guest VM</h2>
<p>The last thing we have to deal with is the fact that the reports are generated on the guest virtual machine. In our <a title="Using Puppet and Vagrant to make a one-click development environment" href="http://eyefodder.com/2014/08/one-click-development-environment.html">existing setup</a>, we use <code>rsync</code> to push code changes from the host to the virtual machine. But this only works one way: files created within the virtual machine won&#8217;t show up on the host. We solve this with these lines in the <code>Vagrantfile</code>:</p>
<pre class="brush: ruby; first-line: 21; title: ; notranslate">
  config.vm.synced_folder '../reports', '/reports'
  config.vm.synced_folder &quot;../&quot;, &quot;/app&quot;, type: &quot;rsync&quot;, rsync__exclude: [&quot;.git/&quot;, &quot;ops/*&quot;, &quot;reports/&quot;, &quot;tmp/&quot;, &quot;log/&quot;, &quot;.#*&quot;]
</pre>
<p>What this does is exclude the <code>reports</code> folder from the main <code>rsync</code> and instead set up a new (regular) shared folder that maps <code>reports</code> to <code>/reports</code> on the virtual machine (note this is a root-level folder on the guest, not inside the <code>/app</code> folder). This is why we have used an environment variable to tell SimpleCov where to output reports.</p>
<h2>Beware the emperor&#8217;s new code coverage</h2>
<p><a href="http://commons.wikimedia.org/wiki/File:Page_45_illustration_in_fairy_tales_of_Andersen_(Stratton).png"><img class="alignnone size-large wp-image-222" src="http://eyefodder.com/wp-content/uploads/2014/09/emperors_new_clothes-754x1024.png" alt="emperors_new_clothes" width="540" height="733" /></a></p>
<p>One thing to bear in mind is that code coverage really is a very crude metric. There are different types of coverage metrics, and SimpleCov only provides &#8216;C0&#8217; coverage: lines of code that executed. Other types include <a href="http://www.bignerdranch.com/blog/code-coverage-and-ruby-1-9/" target="_blank">branch and path</a> coverage, but as far as I know, there aren&#8217;t any tools for these in Ruby. Let me show you an example of where this falls down:</p>
<p><a href="http://eyefodder.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-17-at-10.15.28-AM.png"><img class="alignnone size-large wp-image-221" src="http://eyefodder.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-17-at-10.15.28-AM-1024x826.png" alt="code coverage metrics for a single file" width="540" height="435" /></a></p>
<p>If you look at this report, we can see that the <code>some_method_with_conditionals</code> gets called, but only the <code>say_yes</code> path (lines 12 and 13) executes, and we never confirm that &#8216;no&#8217; gets sent if we pass <code>false</code> to the method. So far, so good, until we look at <code>some_method_with_ternary</code>. This is basically the same method refactored to be more compact, and with the same tests run against it. Yet we are told it is totally covered. So is the metric even still useful?</p>
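<p>The blind spot is easy to reproduce. Here is a self-contained Ruby sketch of the same situation (hypothetical methods, not the ones in the spex repo): a test suite that only ever passes <code>true</code> leaves the <code>else</code> line of the first method visibly uncovered, while every line of the ternary version counts as executed:</p>

```ruby
# Two equivalent methods. A test that only passes `true` gives the
# if/else version an uncovered line, but the ternary version shows
# 100% C0 (line) coverage even though the 'no' branch never ran.
def answer_with_conditional(flag)
  if flag
    'yes'
  else
    'no'   # never executed by a `true`-only test: visibly uncovered
  end
end

def answer_with_ternary(flag)
  flag ? 'yes' : 'no'  # one line, so C0 coverage reports it as covered
end

puts answer_with_conditional(true)  # => yes
puts answer_with_ternary(true)      # => yes
```

Both methods behave identically at runtime; only the line-based coverage report tells a different story about each.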
<p>I still think code coverage is a valuable metric, if only to show you where there are holes in your test suite. If you go in with this knowledge and understanding the limitations, then you will be better equipped to maintain the quality of your app over time.</p>
<h2>Code Coverage is a temporal metric</h2>
<p>The last thing I want to mention about code coverage is that it&#8217;s useful to understand how your coverage changes over time. Particularly if you are managing a team of developers, it provides a quick warning if they are slipping on their test writing. If you have a Continuous Integration machine, you can track these sorts of metrics over time, which can really help you get a sense of where things are headed.<br />
In my next post I&#8217;ll show how to set up your very own CI machine with just a few clicks&#8230;</p>
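<p>If you want a number you can chart before your CI machine exists, SimpleCov also leaves a small JSON summary beside its reports. A hedged sketch of reading it (the summary file is <code>coverage/.last_run.json</code> in the versions I have used, but the file and key names may differ in yours):</p>

```ruby
require 'json'

# Pull the overall coverage percentage out of SimpleCov's summary file.
# Key names vary between SimpleCov versions, so check the common ones.
def covered_percent(path)
  result = JSON.parse(File.read(path))['result']
  result['covered_percent'] || result['line']
end

# Fabricated sample so the sketch is self-contained:
File.write('sample_last_run.json', '{"result":{"covered_percent":87.4}}')
puts covered_percent('sample_last_run.json')  # => 87.4
```

Logging that number per build gives you the trend line even without a dashboard.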
<p>The post <a rel="nofollow" href="http://eyefodder.com/2014/09/code-coverage-setup-rails.html">Code Coverage — a simple Rails example</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2014/09/code-coverage-setup-rails.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Code Coverage with Flex &#8211; a headless agent for CI builds</title>
		<link>http://eyefodder.com/2009/07/code_coverage_with_flex_a_head.html</link>
		<comments>http://eyefodder.com/2009/07/code_coverage_with_flex_a_head.html#comments</comments>
		<pubDate>Tue, 28 Jul 2009 17:29:37 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Agile Software Development]]></category>
		<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Flash & Actionscript]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=33</guid>
<description><![CDATA[<p>In my last blog post I gave details of how I use the modified code coverage viewer for flex in an automated build to follow the trend of code coverage over time. The trouble with this approach was that there was a problem either with the localConnection in flex or the code that uses it [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2009/07/code_coverage_with_flex_a_head.html">Code Coverage with Flex &#8211; a headless agent for CI builds</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>In my last blog post I gave details of how I use the modified <a href="http://www.eyefodder.com/2009/07/code_coverage_with_flex_runnin.html">code coverage viewer for flex</a> in an automated build to follow the trend of code coverage over time. The trouble with this approach was that there was a problem, either with the <a href="http://livedocs.adobe.com/flex/3/html/help.html?content=security2_13.html">localConnection</a> in flex or with the code that uses it, that caused a wide variance in the values being reported. This post shows you how I fixed it by creating a headless code coverage reporter that you can drop into your test harness, removing the need for a second application altogether.</p>
<p><span id="more-33"></span><br />
In order to do this I decided to use as much of the code as possible that the <a href="http://code.google.com/p/flexcover/">FlexCover</a> guys had written, and only change what I needed to get it working. Once I had it up and running I could worry about making it faster / leaner / more neatly coded&#8230;</p>
<h3>The approach</h3>
<p>So here&#8217;s the basic plan:</p>
<ol>
<li>Swap out the default localConnection reporting mechanism for one that will pick and choose depending on environment the app is running in</li>
<li>Create a headless agent that will pull the code coverage data collection and reporting side of the coverageViewer into the app</li>
<li>Use commandline options to swap between the headless and localConnection agent</li>
<li>Default to using the localConnection</li>
</ol>
<h3>How it works</h3>
<p>When your app compiles using the instrumented SDK created for using <a href="http://code.google.com/p/flexcover/">FlexCover</a>, your application makes use of a class called CoverageManager. This manager is a patch that allows you to plug in a custom code coverage agent for use in your app. By default it uses a class called <code>LocalConnectionCoverageAgent</code>, which broadcasts code coverage metadata to the coverage viewer application. My patch allows you to use a different, headless agent instead. To do this, simply call the following during the preinitialize event of your main application:</p>
<pre class="brush: as3; title: ; notranslate">
private function injectAgent():void {
  CoverageManager.agent = new CoverageAgentSwitch();
}
</pre>
<p>As a quick test, run your app as you did previously and confirm it still works by sending data over a localConnection to the CoverageViewer; this is expected, as by default the switch will create a LocalConnectionCoverageAgent. In order to use the headless agent you need to set a few commandline properties. First you need to tell the switch that you want to go headless. Then the headless agent needs to know where it&#8217;s getting its metadata from, and where it should output the code coverage report. The commandline options are:</p>
<pre class="brush: bash; title: ; notranslate">
# headless is the only option; anything else defaults to the LocalConnection agent
-coverage-agent-type=headless
-coverage-metadata='/full/path/to/coverage/metadata.cvm'
-coverage-output='/full/path/to/coverage/reportInOriginalFormat.cvr'
-emma-report='/full/path/to/coverage/reportInEmmaFormat.xml'
</pre>
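<p>The switching logic itself is just a strategy lookup with a default. Here is the idea sketched in Ruby rather than ActionScript (class and option names are illustrative, not the actual FlexCover API):</p>

```ruby
# Pick a coverage agent based on a commandline-style option, defaulting
# to the LocalConnection-style agent -- the behaviour described above.
class LocalConnectionAgent
  def report(data)
    "broadcast #{data} over LocalConnection"
  end
end

class HeadlessAgent
  def report(data)
    "write #{data} straight to a report file"
  end
end

def build_agent(options)
  if options['coverage-agent-type'] == 'headless'
    HeadlessAgent.new
  else
    LocalConnectionAgent.new  # default when the option is absent or unrecognised
  end
end

puts build_agent({}).class                                       # LocalConnectionAgent
puts build_agent({ 'coverage-agent-type' => 'headless' }).class  # HeadlessAgent
```

The default-to-LocalConnection choice means existing builds keep working unchanged unless they opt in.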
<p>Note that if you don&#8217;t specify the metadata path and at least one of the report formats, the headless agent will log errors but otherwise fail without halting.<br />
In terms of the edits I made to get it working, that is pretty much it. The headlessCoverageAgent swc is probably larger than it needs to be and is definitely grossly inefficient. I will update it soon, but right now I only have time to get this post up.<br />
Obviously you will have to change your build script to pass these new commandline parameters in to the test harness when you run it. If there is any interest, I&#8217;ll post my updated build script and test harness modifications, which dispense with the log parser altogether and make for a more repeatable build script.</p>
<h3>The Result</h3>
<p>My build process is now much quicker because I don&#8217;t have to wait to be sure that the coverage viewer has initialized. It&#8217;s also completely stable and the code coverage trend has been a great motivator for the team.<br />
<img src="http://www.eyefodder.com/images/betterCoverage.png" alt="better coverage" width="324" height="266" /><br />
<em>Code Coverage trend without crazy variance found using the external viewer</em><br />
<a href="http://www.eyefodder.com/blog/downloads/headlessCoverageAgent.swc.zip">Download headless library</a></p>
<h3>Update</h3>
<p>A few of you have asked for a copy of the source code so you can play around with it and undoubtedly make it all work better for your needs. Attached here—with no warranty or support—is the <a href="http://www.eyefodder.com/blog/downloads/headlessCoverageAgent_src.zip">headless agent project</a>. In order to get it to work, you will need to have the <a href="http://flexcover.googlecode.com/svn/trunk/CoverageAgent/">CoverageAgent</a> and <a href="http://flexcover.googlecode.com/svn/trunk/CoverageUtilities">CoverageUtilities</a> projects in your workspace, as well as the modified <a href="http://www.eyefodder.com/2009/07/flex_code_coverage_process_par.html">CoverageViewer</a> project from previous posts. Good luck with it &#8211; It&#8217;s code I haven&#8217;t looked at for a while so don&#8217;t hate on me for it :/</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2009/07/code_coverage_with_flex_a_head.html">Code Coverage with Flex &#8211; a headless agent for CI builds</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2009/07/code_coverage_with_flex_a_head.html/feed</wfw:commentRss>
		<slash:comments>22</slash:comments>
		</item>
		<item>
		<title>Code Coverage with Flex &#8211; ANT build for running the viewer</title>
		<link>http://eyefodder.com/2009/07/code_coverage_with_flex_runnin.html</link>
		<comments>http://eyefodder.com/2009/07/code_coverage_with_flex_runnin.html#comments</comments>
		<pubDate>Sat, 25 Jul 2009 19:47:08 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Agile Software Development]]></category>
		<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Flash & Actionscript]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=32</guid>
		<description><![CDATA[<p>In my last post, I gave you my elegant extension hack for generating EMMA style code coverage reports from FlexCover. This post covers the first route I took to incorporating this in my build process. It does work, but it&#8217;s not very consistent in its reporting and I&#8217;ll explain why at the end&#8230; So by [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2009/07/code_coverage_with_flex_runnin.html">Code Coverage with Flex &#8211; ANT build for running the viewer</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In my last post, I gave you my <del>elegant extension</del> hack for generating <a href="http://emma.sourceforge.net/">EMMA</a> style code coverage reports from <a href="http://code.google.com/p/flexcover/">FlexCover</a>. This post covers the first route I took to incorporating this in my build process. It does work, but it&#8217;s not very consistent in its reporting and I&#8217;ll explain why at the end&#8230;</p>
<p><span id="more-32"></span><br />
So by now I had my code coverage viewer happily spitting out EMMA formatted code coverage reports that my <a href="https://wiki.jenkins-ci.org/display/JENKINS/Meet+Jenkins">build machine</a> was just dying to consume. All I had to do was get the running of the code coverage test and the outputting of the report to be an integral part of my build. Here is the sequence of events for my build process:</p>
<ol>
<li>clean my output folders (delete and recreate them)</li>
<li>Compile the main application using the instrumented SDK. This will generate the <code>.cvm</code> metadata file that the coverageViewer needs</li>
<li>Compile the test harness using the instrumented SDK. This is so that when it runs, the hooks injected by the SDK report coverage data.</li>
<li>Launch the code coverage viewer Application (specifying EMMA output file)</li>
<li>Run the test harness. When it is finished, call CoverageManager.exit() which will close the viewer</li>
<li>Wait for the test results and code coverage report to be available. By implication this means that the test harness and code coverage viewer have quit.</li>
<li>Compile the main app using the standard SDK</li>
<li>(AIR Apps only) remove test harness and any test data from the output folder</li>
<li>(AIR Apps only) package up the application</li>
</ol>
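<p>Step 6 above amounts to polling for a file with a timeout. A minimal sketch of that wait in Ruby (a hypothetical helper, not part of the actual ANT build):</p>

```ruby
# Poll for a file another process is expected to write, giving up after
# a timeout. Returns true if the file appeared in time, false otherwise.
def wait_for_file(path, timeout_seconds, poll_interval = 0.1)
  deadline = Time.now + timeout_seconds
  until File.exist?(path)
    return false if Time.now > deadline
    sleep poll_interval
  end
  true
end

# Simulate the viewer having written its report:
File.write('coverage_report.xml', '<report/>')
puts wait_for_file('coverage_report.xml', 5)   # => true
puts wait_for_file('no_such_report.xml', 0.3)  # => false
```

A poll with a deadline fails fast when something goes wrong, instead of burning a fixed sleep on every build.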
<p>In theory this is all well and good. In practice there are a few problems:</p>
<ul>
<li>The viewer has to launch and parse the code coverage metadata file. There isn&#8217;t a simple way to feed this back to the ant script, so you have to get the script to wait. I set mine to 30 seconds, which should be enough. Nevertheless, this sort of thing frankly just irks me &#8211; either it will fail at some point, or up until that point the build is taking longer than it needs to.</li>
<li>There are two applications being launched by the build process. In order for this to work, you need to launch the coverageViewer with <code>spawn='true'</code>. If something goes wrong with the build, the script no longer has control of this process (I might be wrong on this one &#8211; correct me if so..)</li>
<li>The code coverage agent (that gets injected into our test harness by the instrumented SDK) and the viewer communicate via <a href="http://livedocs.adobe.com/flex/3/html/help.html?content=security2_13.html">LocalConnection</a>. This allows two flash applications on the same machine to talk to one another. Unfortunately, LocalConnections can be kind of flaky and you have to build a separate local connection for each direction of communication.</li>
</ul>
<p>The first two I could live with, but when I put this together, I found quite a wide variance of about 10% in code coverage values from build to build:<br />
<img src="http://www.eyefodder.com/blog/images/coverageCIBad.gif" alt="coverageCIBad.gif" width="325" height="250" /><br />
<em>Code Coverage trend showing wide variance of reporting results</em></p>
<p>I was left with one of two conclusions:</p>
<ol>
<li>There was a bug or fragility in the localConnection, and it couldn&#8217;t be relied on</li>
<li>There was a bug in the coverageAgent.exit() code sequence</li>
</ol>
<p>The way CoverageAgent.exit() works is that you call it from within your application and it:</p>
<ol>
<li>Waits until the application has sent all remaining code coverage data</li>
<li>Sends a message to the code coverage viewer asking it to prepare for exit</li>
<li>The viewer writes out any report files</li>
<li>Exits, and sends a message back to the main application telling it to exit</li>
</ol>
<p>Somewhere in this back and forth, code coverage data was getting lost, and occasionally the build was failing because the report file was not getting written. I could have attempted to debug the code, but in my experience, debugging LocalConnections is a royal ballache. In the end I decided to bring the reporting side of the viewer &#8216;in-house&#8217; and avoid the need for a second application altogether. My next post will show this and give you a download of the swc so that you can do it yourself&#8230;<br />
Just in case you are interested, I have attached the buildfile I used for running the coverageViewer application as part of the build. If you can wait till I post the next update to this, I&#8217;d suggest you do <img src="http://eyefodder.com/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /><br />
<a href="http://www.eyefodder.com/blog/downloads/coverageViewerANTBuild.zip">Download build file archive</a></p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2009/07/code_coverage_with_flex_runnin.html">Code Coverage with Flex &#8211; ANT build for running the viewer</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2009/07/code_coverage_with_flex_runnin.html/feed</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Code Coverage with Flex &#8211; creating EMMA formatted reports</title>
		<link>http://eyefodder.com/2009/07/flex_code_coverage_process_par.html</link>
		<comments>http://eyefodder.com/2009/07/flex_code_coverage_process_par.html#comments</comments>
		<pubDate>Fri, 24 Jul 2009 15:16:58 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Agile Software Development]]></category>
		<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Flash & Actionscript]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=31</guid>
		<description><![CDATA[<p>Details on my hack for creating EMMA formatted code coverage reports using FlexUnit</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2009/07/flex_code_coverage_process_par.html">Code Coverage with Flex &#8211; creating EMMA formatted reports</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Over the last few months I have adopted <a href="https://hudson.dev.java.net/">Hudson</a> as my build machine of choice as it is just so easy to setup and administer. Another thing I really like is being able to watch the trend of the number of tests in my test harness over time. It&#8217;s not the best metric, but it does act as a reasonable motivator.<br />
A slightly less crude metric is <a href="http://en.wikipedia.org/wiki/Code_coverage">code coverage</a>, which measures the amount of an application that gets exercised when it&#8217;s run. <a href="http://code.google.com/p/flexcover/">FlexCover</a> is a very cool tool for this, and props to my colleague <a href="http://blogs.adobe.com/auhlmann/">Alex Uhlmann</a> and to <a href="http://joeberkovitz.com/">Joe Berkovitz</a> of Allurent for the great work they&#8217;ve done. There is a great UI for exploring code coverage in detail, and it can also export XML-formatted reports on coverage.<br />
The thing is, I want to be able to track code coverage over time in Hudson, just like I can with the number of tests. I achieved this by extending FlexCover to output <a href="http://emma.sourceforge.net/">EMMA</a> formatted reports&#8230;</p>
<p><span id="more-31"></span><br />
So as I was starting to look at this, I could see one of three paths:</p>
<ol>
<li>Create a Hudson plugin to consume flexCover&#8217;s code coverage report format</li>
<li>Add some sort of XSLT transform to my build process</li>
<li>Modify FlexCover to be able to output a report format that Hudson understands</li>
</ol>
<p>Creating a plugin for Hudson certainly seemed like a possibility, but the plugins for EMMA and Cobertura were already there and stable, so it seemed much simpler to try to create a code coverage report in a format one of these plugins would understand. Creating an XSLT would work for this, but I&#8217;m not an XML guru, so I figured I&#8217;d go down the route of the simplest thing that would work for me and <del>hack</del> extend FlexCover to be able to output EMMA formatted code coverage reports.<br />
As it turns out this was a pretty simple job, and after a few hours&#8217; work I managed to create a patch that allows you to output .cvr or EMMA reports with an additional commandline argument. Alex, Joe and I have talked, and although my way of doing it works, it is not designed to be scalable or modular enough to support more formats. They are incorporating the patch, but will probably rework it with these things in mind. They are super busy guys and it won&#8217;t happen quickly; in the meantime you can grab the patch from here if you want, but bear in mind it is offered with absolutely no warranties or support <img src="http://eyefodder.com/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></p>
<h4>Installation</h4>
<p>The first thing you need to do is check out the FlexCover code from Google Code (http://flexcover.googlecode.com/svn/trunk/CoverageViewer) and patch it with the file attached. The patch is created from the project level down. To do this, you can right click the file from within Eclipse and select &#8216;Team&gt;Apply Patch&#8230;&#8217;. Follow the instructions and you should be good to go&#8230;</p>
<h4>Usage</h4>
<p>In order to write out the code coverage report file with the viewer, you need to supply the commandline argument:</p>
<pre class="brush: bash; title: ; notranslate">-output /full/path/to/my/flexcoverreport.xml</pre>
<p>This functionality is unchanged, but now you can also specify an EMMA formatted file using the following:</p>
<pre class="brush: bash; title: ; wrap-lines: true; notranslate">-emma-report /full/path/to/my/EmmaReportName.xml</pre>
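<p>For reference, EMMA reports are plain XML, so they are easy to inspect or post-process. Here is a Ruby sketch of pulling a coverage figure out of an EMMA-style file (the element layout and values below are simplified for illustration; real EMMA reports carry more structure):</p>

```ruby
require 'rexml/document'

# A cut-down, EMMA-style fragment (illustrative values, simplified layout):
xml = <<~XML
  <report>
    <data>
      <all name="all classes">
        <coverage type="line, %" value="87% (261/300)"/>
      </all>
    </data>
  </report>
XML

doc = REXML::Document.new(xml)
line = REXML::XPath.first(doc, "//coverage[@type='line, %']")
puts line.attributes['value']  # => 87% (261/300)
```

Hudson&#8217;s EMMA plugin reads percentages out of elements like these to draw its trend graphs.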
<p>And that&#8217;s about it! My next post covers the challenges I had integrating this into my build process, and how I overcame them&#8230;</p>
<p><a href='http://www.eyefodder.com/downloads/emmaReportPatch2.txt'>Download patch file</a></p>
<h3>Update</h3>
<p>Sorry this has taken so long to get together, and work out what was going on. I got swamped under a tonne of work, and neglected to update this <img src="http://eyefodder.com/wp-includes/images/smilies/icon_sad.gif" alt=":(" class="wp-smiley" /></p>
<p>The link above is for an updated patch file (and note &#8211; I did this patch from the root level of the project), and for your pleasure below is the infamous missing class. I had to hunt through my old email, as my old dev machine died a while back and appallingly I didn&#8217;t have the file under source control (I think it&#8217;s because I was patching someone else&#8217;s repository). Anyhoo &#8211; enough excuses; here is the missing class: <a href='http://www.eyefodder.com/downloads/EMMAReportAdapter.as'>now renamed EMMAReportAdapter</a></p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2009/07/flex_code_coverage_process_par.html">Code Coverage with Flex &#8211; creating EMMA formatted reports</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2009/07/flex_code_coverage_process_par.html/feed</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Continuous Integration with Flex &#8211; a better log parser</title>
		<link>http://eyefodder.com/2007/06/continuous_integration_with_fl_7.html</link>
		<comments>http://eyefodder.com/2007/06/continuous_integration_with_fl_7.html#comments</comments>
		<pubDate>Wed, 13 Jun 2007 10:01:34 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Flash & Actionscript]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=28</guid>
		<description><![CDATA[<p>About a year ago, I posted a six part series explaining how to set up a continuous integration process for your Flex projects. Since then I have been refining the process when I have had a spare moment. One of the hassles I found when trying to set up continuous integration on a new machine was getting the [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/continuous_integration_with_fl_7.html">Continuous Integration with Flex &#8211; a better log parser</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>About a year ago, I posted a six part series explaining how to set up a <a title="Continuous Integration with Flex – Introduction" href="http://www.eyefodder.com/2006/05/continuous_integration_with_fl.html">continuous integration</a> process for your Flex projects. Since then I have been refining the process when I have had a spare moment. One of the hassles I found when trying to set up continuous integration on a new machine was getting the <a title="Continuous Integration with Flex – Part 4" href="http://www.eyefodder.com/2006/05/continuous_integration_with_fl_4.html">Python-based Flash log parser</a> working. I decided to remove the Python dependency altogether and create a jar that parses the Flash logs.</p>
<p><span id="more-28"></span><br />
So, grab the <a href="http://www.eyefodder.com/blog/downloads/FlashLogParser.jar.zip">jar from here</a> and unzip it to your externals/lib directory. Then add the following properties to your build:</p>
<pre class="brush: xml; title: ; wrap-lines: true; notranslate">&lt;property name=&quot;logParser&quot; value=&quot;${lib}/FlashLogParser.jar&quot;/&gt;
&lt;property name=&quot;flashStatus.location&quot; value=&quot;${logs}/status.txt&quot;/&gt;
&lt;property name=&quot;flashOutput.location&quot; value=&quot;${logs}/TEST-testOutput.xml&quot;/&gt;
</pre>
<p>Then, change your parseFlashLog target to look like this:</p>
<pre class="brush: xml; title: ; notranslate">&lt;target name=&quot;parseFlashLog&quot; description=&quot;parses flash log&quot; depends=&quot;clean&quot; &gt;
&lt;java jar=&quot;${logParser}&quot; failonerror=&quot;true&quot; fork=&quot;true&quot;&gt;
&lt;arg line=&quot;'${flashlog.location}'&quot;/&gt;
&lt;arg line=&quot;'${flashStatus.location}'&quot;/&gt;
&lt;arg line=&quot;'${flashOutput.location}'&quot;/&gt;
&lt;/java&gt;
&lt;/target&gt;
</pre>
<p>Now you should be good to go&#8230;<br />
In my next post I am going to do a full dissection of my latest build file &#8211; I have spent some time refining it over the last year, and I think it&#8217;s probably helpful to look at it bit by bit&#8230;</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/continuous_integration_with_fl_7.html">Continuous Integration with Flex &#8211; a better log parser</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2007/06/continuous_integration_with_fl_7.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CruiseControl on the Mac &#8211; modifying the build script to work x-platform</title>
		<link>http://eyefodder.com/2007/06/cruisecontrol_on_the_mac_modif.html</link>
		<comments>http://eyefodder.com/2007/06/cruisecontrol_on_the_mac_modif.html#comments</comments>
		<pubDate>Tue, 12 Jun 2007 16:10:22 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Flash & Actionscript]]></category>
		<category><![CDATA[Test Driven Development]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=27</guid>
		<description><![CDATA[<p>So, I thought I was doing pretty well, getting svn working on the Mac, installing cruisecontrol for my continuous integration, even getting SCPlugin working with unsigned certificates. Then I tried to run my ant build, and ended up having all sorts of problems getting my Mac debug player to run. Some investigating and help from [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/cruisecontrol_on_the_mac_modif.html">CruiseControl on the Mac &#8211; modifying the build script to work x-platform</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>So, I thought I was doing pretty well, getting svn working on the Mac, installing <a href="http://cruisecontrol.sourceforge.net/overview.html">cruisecontrol</a> for my continuous integration, even getting <a href="http://scplugin.tigris.org/servlets/ProjectProcess?pageID=rgnEkt">SCPlugin</a> working with unsigned certificates. Then I tried to run my ant build, and ended up having all sorts of problems getting my Mac debug player to run. Some investigating and help from the Ant folks later, and I have a solution.</p>
<p><span id="more-27"></span><br />
So, when I ran my ant build, all went well until the <code>runTest</code> target executed (or at least tried to execute). I got the following build error:</p>
<pre class="brush: java; title: ; wrap-lines: true; notranslate">
Execute failed: java.io.IOException: debugPlayerMac.app cannot execute
</pre>
<p>You see, the problem is that on a Mac, the standalone player (like other applications on the Mac) is actually a folder containing all sorts of cleverness inside. Ant doesn&#8217;t know how to execute a folder, so I was a bit stuck. Until, after lots of digging, I found the answer in an old <a href="http://osflash.org/">osflash</a> mailing list <a href="http://readlist.com/lists/osflash.org/osflash/0/1908.html">archive</a>. Basically I have to use the &#8216;open&#8217; command to launch the app &#8211; something like this:</p>
<pre class="brush: xml; title: ; notranslate">&lt;exec executable=&quot;open&quot;&gt;
&lt;arg line=&quot;${pathToFlashPlayer}&quot;/&gt;
&lt;arg line=&quot;${pathToSWFToPlay}&quot;/&gt;
&lt;/exec&gt;</pre>
<p>I tried this, and it worked nicely. The problem is that my ant build needs to wait until the test harness runs and closes the player before reading the log and parsing the results for cruisecontrol to consume. In the ant script above, the open command only stalls the ant script until it has finished doing its opening magic, so ant gets upset as the test results haven&#8217;t been written yet. I slept on this but couldn&#8217;t think of an answer, so I sent an email to the <a href="http://www.nabble.com/Running-.app-on-Mac-OSX-tf3908536.html">ant users</a> mailing list. Mere minutes later I got a <a href="http://www.nabble.com/Running-.app-on-Mac-OSX-tf3908536.html">reply</a> that helped me work out how to crack this.<br />
Basically, Ant&#8217;s <code>&lt;parallel&gt;</code> task can run two threads and move on when they are both complete. So in one thread we launch the player, and in another thread we first wait for the log file to be available, then check whether it contains what we want (I am testing against the flag <code>-----------------TESTRUNNEROUTPUTENDS----------------</code> which I used in my result printer).  It worked like a charm:</p>
<pre class="brush: xml; title: ; notranslate">&lt;target name=&quot;runTest&quot; description=&quot;runs the test harness&quot; depends=&quot;compileTest&quot;&gt;
&lt;parallel&gt;
&lt;exec executable=&quot;open&quot; spawn=&quot;no&quot;&gt;
&lt;arg line=&quot;${debugPlayer}&quot;	/&gt;
&lt;arg line=&quot;'${testHarness.swf}'&quot;/&gt;
&lt;/exec&gt;
&lt;sequential&gt;
&lt;waitfor&gt;
&lt;available file=&quot;${flashlog.location}&quot;/&gt;
&lt;/waitfor&gt;
&lt;waitfor&gt;
&lt;isfileselected file=&quot;${flashlog.location}&quot;&gt;
&lt;contains text=&quot;-----------------TESTRUNNEROUTPUTENDS----------------&quot;/&gt;
&lt;/isfileselected&gt;
&lt;/waitfor&gt;
&lt;/sequential&gt;
&lt;/parallel&gt;
&lt;/target&gt;
</pre>
<p>The last step to making this properly cross platform for cruisecontrol was to have the right <code>&lt;exec&gt;</code> command called depending on the OS. Fortunately there is an <code>os</code> attribute you can specify on <code>&lt;exec&gt;</code> and the task will only run if the os matches the platform you are running the task on. A few minutes later, I had my amended runTest target:</p>
<pre class="brush: xml; title: ; notranslate">&lt;target name=&quot;runTest&quot; description=&quot;runs the test harness&quot; depends=&quot;compileTest&quot;&gt;
&lt;parallel&gt;
&lt;exec executable=&quot;${debugPlayerWin}&quot; spawn=&quot;no&quot; os=&quot;Windows XP&quot;&gt;
&lt;arg line=&quot;'${testHarness.swf}'&quot;/&gt;
&lt;/exec&gt;
&lt;exec executable=&quot;open&quot; spawn=&quot;no&quot; os=&quot;Mac OS X&quot;&gt;
&lt;arg line=&quot;${debugPlayerMac}&quot;	/&gt;
&lt;arg line=&quot;'${testHarness.swf}'&quot;/&gt;
&lt;/exec&gt;
&lt;sequential&gt;
&lt;waitfor&gt;
&lt;available file=&quot;${flashlog.location}&quot;/&gt;
&lt;/waitfor&gt;
&lt;waitfor&gt;
&lt;isfileselected file=&quot;${flashlog.location}&quot;&gt;
&lt;contains text=&quot;-----------------TESTRUNNEROUTPUTENDS----------------&quot;/&gt;
&lt;/isfileselected&gt;
&lt;/waitfor&gt;
&lt;/sequential&gt;
&lt;/parallel&gt;
&lt;/target&gt;
</pre>
<p>I&#8217;m planning to post my completed project build script and step through it in more detail than I did <a href="http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_2.shtml">previously</a>. I&#8217;ve updated it quite a lot in the last year, and so far, I think it&#8217;s a lot neater. Before I do that, I&#8217;m going to post about removing the Python dependency from my continuous integration process.</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/cruisecontrol_on_the_mac_modif.html">CruiseControl on the Mac &#8211; modifying the build script to work x-platform</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2007/06/cruisecontrol_on_the_mac_modif.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Restarting cruisecontrol on Mac OSX</title>
		<link>http://eyefodder.com/2007/06/restarting_cruisecontrol_on_ma.html</link>
		<comments>http://eyefodder.com/2007/06/restarting_cruisecontrol_on_ma.html#comments</comments>
		<pubDate>Mon, 11 Jun 2007 10:30:48 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Continuous Integration]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=26</guid>
		<description><![CDATA[<p>OK, so for those of you who know Unix better than me (which is probably most of you) this post will be like teaching your granny to suck eggs, but for the rest of us, it took me some time to work out how to stop and start the cruisecontrol server instance on the Mac&#8230; You see, [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/restarting_cruisecontrol_on_ma.html">Restarting cruisecontrol on Mac OSX</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>OK, so for those of you who know Unix better than me (which is probably most of you) this post will be like teaching your granny to suck eggs, but for the rest of us, it took me some time to work out how to stop and start the cruisecontrol server instance on the Mac&#8230;</p>
<p><span id="more-26"></span><br />
You see, on Windows, I just press ctrl+c and the process terminates so you can then restart the server. Well, on the Mac, that doesn&#8217;t happen; if you press ctrl+c, the server instance keeps running. So, thanks to some <a href="http://sourceforge.net/mailarchive/forum.php?thread_name=C28F09C4.5904%25pbh%40adobe.com&amp;forum_name=cruisecontrol-user">friendly assistance</a>, I found out how to kill the server. First, type <code>ps -e</code>. This will list all the running processes. The one you are looking for will look something like this:</p>
<pre class="brush: bash; title: ; notranslate">
3267   p2  S  0.15.38   /System/Library/Frameworks/JavaVM.framework/Home/bin/java -Djavax.management.build
</pre>
<p>When you have that, note the PID, and then type <code>kill -9 XXXX</code>, where XXXX is the PID you found in the previous step.</p>
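<p>If you find yourself doing this often, the two steps can be combined into a tiny script. A hedged sketch &#8211; it pulls the PID out of a captured <code>ps</code> line like the one above (the hard-coded sample line stands in for a real <code>ps -e | grep java</code> pipeline):</p>

```shell
# Sample line in the shape quoted above; on a real machine you would
# capture it with something like: line=$(ps -e | grep '[j]ava')
line='3267   p2  S  0.15.38   /System/Library/Frameworks/JavaVM.framework/Home/bin/java -Djavax.management.build'

# The PID is the first whitespace-separated column.
pid=$(echo "$line" | awk '{print $1}')
echo "$pid"   # -> 3267

# With the PID in hand, the kill is the same as typing it by hand
# (commented out here, since there is no such process in this sketch):
# kill -9 "$pid"
```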
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/restarting_cruisecontrol_on_ma.html">Restarting cruisecontrol on Mac OSX</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2007/06/restarting_cruisecontrol_on_ma.html/feed</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>CruiseControl on Mac OSX</title>
		<link>http://eyefodder.com/2007/06/cruisecontrol_on_mac_osx.html</link>
		<comments>http://eyefodder.com/2007/06/cruisecontrol_on_mac_osx.html#comments</comments>
		<pubDate>Fri, 08 Jun 2007 12:25:38 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Continuous Integration]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=25</guid>
		<description><![CDATA[<p>So, I&#8217;ve got this shiny new mac provided by my new employers, and so I figured I&#8217;d put it to use as a CruiseControl build manager. I found the process reasonably simple but, just like the process of setting up Subversion and SCPlugin, there are a couple of extra steps I figured I&#8217;d share&#8230; First [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/cruisecontrol_on_mac_osx.html">CruiseControl on Mac OSX</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>So, I&#8217;ve got this shiny new Mac provided by my new <a href="http://www.adobe.com">employers</a>, and so I figured I&#8217;d put it to use as a CruiseControl build manager. I found the process reasonably simple but, just like the process of setting up <a href="http://www.eyefodder.com/blog/2007/06/subversion_and_finder_integrat.shtml">Subversion and SCPlugin</a>, there are a couple of extra steps I figured I&#8217;d share&#8230;</p>
<p><span id="more-25"></span><br />
First step, download the latest source from the downloads page <a href="http://cruisecontrol.sourceforge.net/download.html">here</a>. Simply expand that to where you want your build instance to live. Next, open up a terminal window and type the following:<br />
<code>cruisecontrol.sh</code><br />
More than likely, you will get the following error message:<br />
<code>-bash: cruisecontrol.sh: command not found</code><br />
More hunting on the mighty <a href="http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_01.html">intergoogle</a> and a quick look at &#8216;Bash Guide for Beginners&#8217; told me the answer &#8211; I needed to invoke a shell explicitly on the cruisecontrol.sh script. I found either of the following lines worked:</p>
<pre class="brush: bash; title: ; notranslate">sh cruisecontrol.sh
bash -x cruisecontrol.sh</pre>
<p>I&#8217;m not a Unix geek, so I don&#8217;t pretend to know what the difference is, or which is preferable &#8211; if anyone can tell me I&#8217;d love to know.</p>
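<p>For what it&#8217;s worth: <code>sh script.sh</code> runs the script with the plain Bourne-compatible shell, while <code>bash -x script.sh</code> runs it with bash, with <code>-x</code> tracing each command as it executes (it&#8217;s just for debugging). The original &#8220;command not found&#8221; happens because the current directory isn&#8217;t on <code>$PATH</code>. A runnable sketch of the options &#8211; it creates a stand-in <code>cruisecontrol.sh</code> so the commands work anywhere:</p>

```shell
# Stand-in for the real cruisecontrol.sh so this sketch runs anywhere.
printf '#!/bin/sh\necho CruiseControl started\n' > cruisecontrol.sh

# Either of these works: each hands the script to a shell explicitly,
# so neither the executable bit nor $PATH gets in the way.
sh cruisecontrol.sh
bash cruisecontrol.sh      # add -x to trace each command as it runs

# Alternatively, mark the script executable and run it by explicit path:
chmod +x cruisecontrol.sh
./cruisecontrol.sh
```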
<p>The post <a rel="nofollow" href="http://eyefodder.com/2007/06/cruisecontrol_on_mac_osx.html">CruiseControl on Mac OSX</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2007/06/cruisecontrol_on_mac_osx.html/feed</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>FlexUnit for Cruise Control &#8211; XML output</title>
		<link>http://eyefodder.com/2006/06/flexunit_for_cruise_control_xm.html</link>
		<comments>http://eyefodder.com/2006/06/flexunit_for_cruise_control_xm.html#comments</comments>
		<pubDate>Fri, 16 Jun 2006 13:43:53 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Flash & Actionscript]]></category>
		<category><![CDATA[Test Driven Development]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=16</guid>
		<description><![CDATA[<p>If you read my earlier posts on Flex and continuous integration, you will remember that we had to do some work to get ASUnit to spit out its results in a manner that would be understood by Cruise Control. We built a log parser to parse results from the Flash player&#8217;s trace file into an [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2006/06/flexunit_for_cruise_control_xm.html">FlexUnit for Cruise Control &#8211; XML output</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>If you read my earlier posts on Flex and continuous integration, you will remember that we had to <a href="http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_3.shtml">do some work</a> to get ASUnit to spit out its results in a manner that would be understood by Cruise Control. We <a href="http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_4.shtml">built a log parser</a> to parse results from the Flash player&#8217;s trace file into an XML file detailing the unit test results, and also a simple status file saying if the test suite succeeded or failed. Here&#8217;s how I got FlexUnit to print out results that will be understood by our parser&#8230;</p>
<p><span id="more-16"></span></p>
<p>I based my work here on how the flexunit.textui.TestRunner and flexunit.textui.ResultPrinter classes work. I created two classes: CruiseControlTestRunner and CruiseControlResultPrinter.</p>
<h2>CruiseControlTestRunner</h2>
<p>This class is incredibly similar to TestRunner. In fact, the only difference is that the private variable <code>printer</code> is set up to use our other new class &#8211; CruiseControlResultPrinter. If this variable had been declared protected, then I would have subclassed TestRunner and simply changed the setup of the printer to use our new class&#8230; <a href="http://www.eyefodder.com/blog/downloads/CruiseControlTestRunner.as">You can download the file here</a></p>
<h2>CruiseControlResultPrinter</h2>
<p>The printer is a little more involved, but still not too tricky. What I do here is create an XML object and append results as they come through. TestListener (the interface that the printer implements) defines functions that get triggered when a test starts and ends, and also when a test fails or errors. These can be used to construct our XML as the suite runs. Let&#8217;s look at the class bit by bit (or you can just <a href="http://www.eyefodder.com/blog/downloads/CruiseControlResultPrinter.as">download the class and get started</a>):</p>
<h3>Constructor</h3>
<p>Dead simple, we just initialize our XML with a top-level node:</p>
<pre class="brush: as3; title: ; notranslate">public function CruiseControlResultPrinter()
{
	// Top-level node; testsuite nodes get appended under it as tests run.
	__resultXML = new XML(&quot;&lt;testsuites/&gt;&quot;);
}</pre>
<h3>startTest</h3>
<p>When a test starts, we want to append a testcase node to our XML. Each testcase node actually needs to sit in a testsuite node, so I created a helper function to automatically create a testsuite node for the testcase if one doesn&#8217;t exist (i.e. if it&#8217;s the first test run in a particular test class):</p>
<pre class="brush: as3; title: ; notranslate">public function startTest( test:Test ):void
{
	var suiteNode:XML = getSuiteNode(test);
	suiteNode.appendChild(new XML(&quot;&lt;testcase name='&quot;+TestCase(test).methodName+&quot;'/&gt;&quot;));
	__testTimer = getTimer();
}
//------------------------------------------------------------------------------
private function getSuiteNode (test:Test):XML{
	var outNode:XML = __resultXML.testsuite.(@name==test.className)[0];
	if(outNode==null){
		outNode = new XML(&quot;&lt;testsuite name='&quot;+test.className+&quot;'/&gt;&quot;);
		__resultXML.appendChild(outNode);
	}
	return outNode;
}</pre>
<h3>addError / addFailure</h3>
<p>When a test fails or errors, we need to append failure details to the testcase node. The child node is named either failure or error depending on the type of problem (these nodes get displayed differently in CruiseControl). A child node populated with the stack trace is appended to the testcase node:</p>
<pre class="brush: as3; title: ; notranslate">
public function addError( test:Test, error:Error ):void{
	onFailOrError(test,error,&quot;error&quot;);
}
//------------------------------------------------------------------------------
public function addFailure( test:Test, error:AssertionFailedError ):void{
	onFailOrError(test,error,&quot;failure&quot;);
}
private function onFailOrError(test:Test,error:Error, failOrError:String):void{
	__suiteSuccess = false;
	var testNode:XML = getTestNode(test);
	var childNode:XML = new XML(&quot;&lt;&quot;+failOrError+&quot;&gt;&quot;+error.getStackTrace()+&quot;&lt;/&quot;+failOrError+&quot;&gt;&quot;);
	testNode.appendChild(childNode);
}
private function getTestNode(test:Test):XML{
	return __resultXML.testsuite.testcase.(@name==TestCase(test).methodName)[0];
}</pre>
<h3>endTest</h3>
<p>If you look at the startTest code, you will see that we set a <code>__testTimer</code> variable. This comes into play in the endTest callback, where we set the execution time of the test case:</p>
<pre class="brush: as3; title: ; notranslate">
public function endTest( test:Test ):void
{
	var testNode:XML = getTestNode(test);
	testNode.@time = (getTimer() - __testTimer)/1000;
}</pre>
<h3>print</h3>
<p>When the test suite is complete, the TestRunner calls the print function on our class. This is actually now a much simpler affair than the one I previously wrote for ASUnit (although I will be revisiting this for ASUnit for AS3 over the next few days&#8230;). It simply traces out our generated XML, and also includes the line at the bottom specifying test suite success (this is read by our ANT build):</p>
<pre class="brush: as3; title: ; notranslate">
public function print( result:TestResult, runTime:Number ):void
{
	printHeader(runTime);
	printMain();
	printFooter(result);
}
private function printHeader( runTime:Number ):void
{
	trace(&quot;-----------------TESTRUNNEROUTPUTBEGINS----------------&quot;);
}
private function printMain():void{
	trace(__resultXML);
}
private function printFooter( result:TestResult ):void
{
	trace(&quot;Test Suite success: &quot;+(result.errorCount()+result.failureCount()==0)+&quot;\n&quot;);
	trace(&quot;-----------------TESTRUNNEROUTPUTENDS----------------&quot;);
}</pre>
<p>And that&#8217;s basically it&#8230; My next post is going to be about how I actually went about implementing these classes and got the test suite to display results differently depending on whether it was being run by the developer or as part of an ANT build&#8230;</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2006/06/flexunit_for_cruise_control_xm.html">FlexUnit for Cruise Control &#8211; XML output</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2006/06/flexunit_for_cruise_control_xm.html/feed</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Unit test frameworks for AS3 and Continuous Integration</title>
		<link>http://eyefodder.com/2006/06/unit_test_frameworks_for_as3_a.html</link>
		<comments>http://eyefodder.com/2006/06/unit_test_frameworks_for_as3_a.html#comments</comments>
		<pubDate>Thu, 15 Jun 2006 10:40:36 +0000</pubDate>
		<dc:creator><![CDATA[Paul Barnes-Hoggett]]></dc:creator>
				<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[Flash & Actionscript]]></category>
		<category><![CDATA[Test Driven Development]]></category>

		<guid isPermaLink="false">http://localhost:8888/?p=14</guid>
		<description><![CDATA[<p>I&#8217;m currently evaluating FlexUnit and ASUnit as we move over to AS3 and seeing how they will fit in with our continuous integration suite. As you may have read in my previous posts on CI, we ended up significantly reworking ASUnit to get it to integrate with our needs for CI. What we are really [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2006/06/unit_test_frameworks_for_as3_a.html">Unit test frameworks for AS3 and Continuous Integration</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>I&#8217;m currently evaluating FlexUnit and ASUnit as we move over to AS3, and seeing how they will fit in with our continuous integration suite. As you may have read in my previous posts on CI, we ended up significantly reworking ASUnit to get it to integrate with our needs for CI. What we are really looking for now is a framework we do not have to monkey around with too much to achieve our needs. The unit test framework we use needs to fulfil the following:</p>
<ul>
<li>Simple synchronous tests</li>
<li>Asynchronous tests</li>
<li>Asynchronous setup (e.g. if you need to load in some data before performing the tests)</li>
<li>Printing the test results out to a log file so that Cruise Control can interpret them</li>
<li>Broadcasting some sort of event when the test suite is complete, so we can close the test harness and continue with the build</li>
<li>Working equally well outside of the Flex framework</li>
</ul>
<p>I&#8217;ll be looking at both <a href="http://labs.adobe.com/wiki/index.php/ActionScript_3:resources:apis:libraries#FlexUnit">FlexUnit</a> and <a href="http://www.asunit.org/">ASUnit</a> to see which one will best fit our needs. Ideally I would like to offer up solutions for both, so that you can fit CI into whatever testing framework you are currently using. I&#8217;ll post my findings as I go, so stay tuned&#8230;</p>
<p>The post <a rel="nofollow" href="http://eyefodder.com/2006/06/unit_test_frameworks_for_as3_a.html">Unit test frameworks for AS3 and Continuous Integration</a> appeared first on <a rel="nofollow" href="http://eyefodder.com">Eyefodder</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://eyefodder.com/2006/06/unit_test_frameworks_for_as3_a.html/feed</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
