Wednesday, December 9, 2009

After All This, Did I Really Learn Anything?

The semester is coming to a close, so this will be my final blog post relating to ICS 613 - Software Engineering. I have verbally committed to a Master's project/thesis with Philip Johnson, so posts on this blog will in all likelihood continue. I also tell myself that I should record some thoughts relating to work on other projects (like my iPhone applications), but we'll see if I ever act on that. Perhaps I'll update my Twitter more often (@keokilee). I'll have more time after this semester since the Master's project/thesis is all I need to complete to graduate. No more classes!

On the very first day of class, Philip pulled me aside and told me that he was concerned that I wouldn't learn anything. After all, I had taken the undergraduate version of the class three years ago and most of the content is the same. Admittedly, I knew that and still took the class because I want to graduate, go into industry, and find a software development job, and I wanted my resume to say that I took a graduate-level course in software engineering. So I was a bit selfish, and I feel a little bad about that. Only a little, though, because despite knowing most of the content, I still had to do the work. And there was a lot of work this semester.

One thing that has changed is the "professional persona" component of the course. During the first month, I set up this blog, created a new resume, and set up a Google Site for employers to look at and see my accomplishments. I also set up my LinkedIn and TechHui memberships, both of which I hope to spend more time on now that the semester is ending. At this point, I haven't heard anything from potential employers, but I hope it improves my chances of getting a job.

What I had somewhat forgotten is how important it is to meet regularly when you're working in a group. This isn't to put down my group members, because I think we cranked out some awesome applications. It's just that at the graduate level, it is common to have a part-time or full-time job while taking classes, and it's more difficult to get people together when everyone is so busy. I had often commented that 613 students have a leg up on 413 students because grad school is our life. But perhaps it is the other way around: 413 students have the leg up because we grad students tend to have life (a.k.a. time-consuming jobs) eating up our time.

But the most important thing I got out of this class was the chance to work on new and interesting applications. The projects (save for Robocode in the beginning) were significant and are definitely things to be proud of. These things are definitely going up on my Google site as soon as I have the time to create the screenshots and do the write up. And I mentioned the Master's project/thesis, which will involve further work on Philip's sustainability research. These opportunities alone make taking the class more than worthwhile.

What made the class even more enjoyable was my fellow classmates. I think that we became a pretty tight knit group (which also happened when I took 413). Many of us hung out and tossed around ideas outside of class, even if we were in separate groups. So shout-outs to Aaron, BJ, Dean, Lyneth, Wahib, and Yichi.

And with that, I bid 613 a fond adieu. I wish my fellow students who are working on extra credit all the best and I hope they do well.

E Hoʻomaluō - Version 2.0

After this somewhat long hiatus, I'm back to blogging about software engineering. Over the past few weeks, we worked on improvements to our original web application based on feedback from both Philip and the code reviews. However, we were also tasked with implementing two new pages: a stoplight page that shows the current carbon status, and a grid information page that shows a chart of the energy generated by the sources on the island. We already went through the pains of learning Apache Wicket for the first time, so this should be a piece of cake, right?

As is often the case, things are more difficult than they appear. Sticking to our original vision, we decided to implement everything using AJAX, including an AJAX tabbed panel in Wicket to switch between the different pages. This task turned out to be pretty easy, but it also led to our biggest discovery: panels as partial HTML.

Panels in Wicket have their own HTML and Java files. They are not to be used as pages in the application; instead, they are designed to be inserted into an existing page. This could be because you want to reuse the component multiple times. In our application, I created a custom loading component that is used twice and can be used even more. However, I also found panels to be useful in that they separate components of a page into separate HTML files. For example, the grid information page has a header, form, and chart. Instead of having these components all in one file, they can be separated into three parts. Given that HTML support in Eclipse is fairly lacking, this made it easier to read since everything isn't in one long file.
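
To make that concrete, here's a rough sketch of the kind of panel I mean. The class below is hypothetical (our actual loading component differs) and assumes Wicket 1.4; the panel's markup lives in a matching LoadingPanel.html file next to the class, shown in the comment.

import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.panel.Panel;

/**
 * A hypothetical reusable loading indicator. The markup lives in
 * LoadingPanel.html next to this class, roughly:
 *
 *   <wicket:panel>
 *     <span wicket:id="message">placeholder text</span>
 *   </wicket:panel>
 */
public class LoadingPanel extends Panel {

  /** Support serialization. */
  private static final long serialVersionUID = 1L;

  /**
   * Creates the panel.
   * @param id The wicket:id this panel is attached to in the parent markup.
   */
  public LoadingPanel(String id) {
    super(id);
    // The Label replaces the placeholder text inside the span at render time.
    add(new Label("message", "Loading data from WattDepot..."));
  }
}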

What really completes panels though is that they can also be tested separately. Since the form and graph are dynamically generated, they are typically difficult to test. However, the WicketTester class has an option to start from a particular panel class. Then we can test the panel as if it were a page in Wicket, even though it's only a partial.
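
A test for that hypothetical panel might look something like this, assuming Wicket 1.4's WicketTester.startPanel(), which renders the panel on a dummy page under the component path "panel":

import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.util.tester.WicketTester;
import org.junit.Test;

/**
 * Tests the hypothetical LoadingPanel in isolation, without rendering
 * the full page it is normally embedded in.
 */
public class TestLoadingPanel {

  /**
   * Renders the panel on WicketTester's dummy page and checks its pieces.
   */
  @Test
  public void testLoadingPanelRenders() {
    WicketTester tester = new WicketTester();
    // startPanel() requires a panel with a single String id constructor.
    tester.startPanel(LoadingPanel.class);
    // The dummy page mounts the panel under the id "panel".
    tester.assertComponent("panel:message", Label.class);
    tester.assertNoErrorMessage();
  }
}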

I also feel that I got more experience doing web design. I've done some basic web design before, but this is the first time I was involved with a significant application that I designed from the ground up. And I am very much satisfied with the design of the application. It could've been cleaned up a little, but we simply ran out of time.

I do think our group process left a lot to be desired. The four of us rarely met all at once. Aaron and I spent some time outside of class and I think we got a lot done, but it would've been nice to get all four of us together more often. I took a more hands-off approach to Dean's and Yichi's work because we rarely met. Our design and code would've been more cohesive if we had all gotten together and worked on it.

Overall, I am quite satisfied with the application. There were many places where we stumbled a little, but I think we came out learning a lot. Development on the system will continue without me, as Philip has decided that any further refinements would be extra credit. On the one hand, I would like to see the project move forward. I have had many ideas for functions that could be added to the current application. However, the extra credit points won't affect my grade and committing myself to extra work for nothing during finals week seems like a really stupid thing to do.

Our project is again located here. There, you can download a distribution of our application.

Tuesday, November 24, 2009

Code Review of the Ekolugical Carbonometer

Our latest task was to do a review of another group's web application. This blog is a review of the Ekolugical Carbonometer by BJ Peter DeLaCruz, Wahib Hanani, and Lyneth Peou. I purposefully did not include any of the suggestions made in class (use whitespace better, remove the names at the top, and add some information about what the system does). This should be a really short blog, right?

The review checklist I used to evaluate the system can be found here.

Review The Build

The system builds correctly without issues.

Review System Usage
  • The proper format of the date is not listed on the front page (i.e. yyyy-mm-dd).
  • If we put in a bad date, the error message should suffice (we don't need to see a table full of N/A's).
  • Inconsistent output: why is 9:00 red when 23:00 is yellow? The value at 23:00 is higher than the value at 9:00.
  • There appears to be an empty table cell at the bottom of the output.
  • The output is in big bold letters compared to the "Enter a date" label and the error label, which look really small. There should be some balance between the two.
  • I would prefer that there be a label that shows the day we're looking at.
  • Overall, the speed is slow with little feedback.
Javadocs
  • Thresholds.java has a public constructor, but the Javadoc says it's private.
Names
  • Some instance variables are named with all caps when they are not constants. This is especially noticeable in the Session class with the results lists.
  • List variables in the session should be named the other way around (i.e. todaysTimestamps instead of TIMESTAMPS_TODAY).
Testing
  • Coverage is outstanding as far as I can tell.
  • While WattDepotCommand is covered by testing the web application, I would like to see a separate test for it to make sure it works correctly. WattDepotCommand really should be a separate component (as suggested in the next section) and should have its own test.
Package design
  • WattDepotCommand is independent of Wicket and should be in a separate package.
Class design
  • Instance variables in the WattDepotCommand class should not be public. For the most part, you do not want these variables to be changed by other classes; they should be private with an appropriate getter. For example, noData should be private, and a method like "hasNoData" should return its value. If you really want another class to be able to modify noData, then add a setter "setNoData(boolean noData)" (EJS 71). See the sketch after this list.
  • Also, why does WattDepotCommand have lists for the results? It doesn't seem to do anything with them.
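The sketch below illustrates that suggestion; the field and method names are guesses based on what I saw during the review, not the actual code.

/** Sketch of encapsulating the noData flag instead of exposing a public field. */
public class WattDepotCommand {

  /** True if the last WattDepot query returned no data. Hidden from other classes. */
  private boolean noData = false;

  /**
   * Tells callers whether the last query returned no data.
   * @return True if there was no data, false otherwise.
   */
  public boolean hasNoData() {
    return this.noData;
  }

  /**
   * Lets collaborators change the flag, if that is really necessary.
   * @param noData The new value of the flag.
   */
  public void setNoData(boolean noData) {
    this.noData = noData;
  }
}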
Method design
  • When the values from WattDepot are added, there are three separate implementations of onComponentTag when a label is created. I suggest creating a class that extends Label and lets you select a color to apply in onComponentTag (see the sketch after this list).
  • I also noticed that onComponentTag does not have an @Override annotation, which it should have to ensure you are actually overriding the method.
  • WattDepotCommand#getCarbonContentData seems to work by appending to a passed-in list, which assumes that the results parameter is an empty list. It would make more sense to return a new list of 24 values (or at least one that matches up with the timestamps) instead of appending to the parameter. Alternatively, you could assert that the list is empty and throw an exception if it isn't, but returning a new list is the better option.
  • In Timestamps#createTimestamps(timestamp, tstamp), there's a temp parameter that is set and incremented, but otherwise doesn't seem to be used at all. It should be removed.
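Here's the kind of Label subclass I have in mind; it's only a sketch (the class name and the CSS-class approach are my own invention, not their code), assuming Wicket 1.4.

import org.apache.wicket.markup.ComponentTag;
import org.apache.wicket.markup.html.basic.Label;

/**
 * A Label that adds a CSS class (e.g. "red", "yellow", "green") to its tag,
 * so the three near-identical onComponentTag overrides collapse into one class.
 */
public class ColorLabel extends Label {

  /** Support serialization. */
  private static final long serialVersionUID = 1L;

  /** The CSS class controlling the color of the rendered tag. */
  private final String cssClass;

  /**
   * Creates the label.
   * @param id The wicket:id of the label.
   * @param text The text to display.
   * @param cssClass The CSS class controlling the color.
   */
  public ColorLabel(String id, String text, String cssClass) {
    super(id, text);
    this.cssClass = cssClass;
  }

  /** Adds the CSS class to the component tag at render time. */
  @Override
  protected void onComponentTag(ComponentTag tag) {
    super.onComponentTag(tag);
    tag.put("class", this.cssClass);
  }
}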
Check for common look and feel
  • As in the review of the command line client, the code definitely looks consistent.
Review the documentation
  • Documentation looks good. One thing I would've liked is a link to WattDepot since you're obviously using that service to get your data.
Review Software ICU
  • Overall their stats look fine. The churn level seems to be on the increase, but it looks good otherwise.
Issue management
  • There is a relative lack of issues for this project. If I just go by the issues in the tracker, then only Lyneth appeared to do any coding and Wahib just wrote the user guide. I'm sure that's not actually the case though.
Review CI
  • There do not seem to be any commits from the 19th or the 20th. Also, on the 21st there was a period of 3 1/2 hours where the build was red. I recommend that you check with Hudson when you commit and make sure the build is not red.
Summary

I think you guys know what you need to do based on the discussion in class, so I won't repeat those points. My additional suggestions are to remove unnecessary elements (empty table cells, extra table rows, the table full of N/A's, the oversized font, etc.) and to use issue tracking more. Also, review the design of the code (naming, methods with side effects, etc.). Obviously, you guys work well together, even if the Google Code issue tracker doesn't reflect that. Learn to use it so that if someone else comes along, they can jump in and know what's going on.

Sunday, November 22, 2009

Wicket (Not the Ewok)

This week in software engineering (TWISE?), we were tasked with creating a web application that displays carbon data from WattDepot. I do some Rails programming, so this should be easy right? Wait, we have to use Java? Well I have some experience in Tomcat, JSP, and some JSF. Wait, Apache Wicket? Well, if it's a web framework, maybe it's like the others I've used. But as I quickly found out, it is quite different.

As a little background, I'm quite comfortable working with Javascript and HTML. I know my way around the Prototype and Scriptaculous libraries and wrote a simple web application that uses just HTML and Javascript. I think that HTML and Javascript go together almost as much as HTML/CSS. If you're a web designer, you need HTML and CSS. If you're a web programmer, you need HTML and Javascript.

So Dean, Aaron, Yichi, and I were tasked with making our web app, E hoʻomaluō (which means "to conserve" in Hawaiian). We wanted to make a cool Web 2.0 application with all the bells and whistles, and I think we're mostly there, with some small caveats. But the most difficult thing I had to get used to was creating this AJAX web app without writing a single line of Javascript. I think one Wicket method took a Javascript event name as a parameter (e.g. onchange or onblur), and that was the closest I got to Javascript.

I have to give props to what Wicket does. They totally took out the code from the HTML file (Javascript, JSP tags, etc) and abstracted it out to Java. My Index.html file reflects this. This is an AJAX application without any Javascript in the HTML file. There are additional HTML ids that correspond to Wicket identifiers, but that's it. This is in stark contrast to Rails (with RHTML tags), PHP, and JSP.
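
To give a flavor of what that looks like, here is a hedged sketch of the kind of thing Wicket lets you do (hypothetical component ids and models, assuming Wicket 1.4's Ajax behaviors). The matching markup only needs wicket:id attributes on a form, a text field, and a span; the event name passed to the behavior is the only whiff of Javascript.

import org.apache.wicket.ajax.AjaxRequestTarget;
import org.apache.wicket.ajax.form.AjaxFormComponentUpdatingBehavior;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.form.Form;
import org.apache.wicket.markup.html.form.TextField;
import org.apache.wicket.markup.html.panel.Panel;
import org.apache.wicket.model.PropertyModel;

/** Hypothetical panel: typing a date updates a status label via AJAX, no Javascript written. */
public class DateStatusPanel extends Panel {

  /** Support serialization. */
  private static final long serialVersionUID = 1L;

  /** Backing fields for the PropertyModels below. */
  private String dateText = "";
  private String statusText = "";

  /**
   * Creates the panel.
   * @param id The wicket:id of this panel.
   */
  public DateStatusPanel(String id) {
    super(id);

    final Label status = new Label("status", new PropertyModel<String>(this, "statusText"));
    // The label needs a markup id so AJAX can repaint it in place.
    status.setOutputMarkupId(true);
    add(status);

    Form<Void> form = new Form<Void>("form");
    add(form);

    TextField<String> dateField =
        new TextField<String>("date", new PropertyModel<String>(this, "dateText"));
    // "onchange" is the only Javascript-ish thing we ever type.
    dateField.add(new AjaxFormComponentUpdatingBehavior("onchange") {
      private static final long serialVersionUID = 1L;

      @Override
      protected void onUpdate(AjaxRequestTarget target) {
        // Recompute whatever the label shows, then repaint just the label.
        statusText = "Showing data for " + dateText;
        target.addComponent(status);
      }
    });
    form.add(dateField);
  }
}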

So as you might imagine, I was a little bit confused at first. I knew what I wanted to do in Javascript terms (on submit, update this div, assign classes to the tags so that the CSS can do its stuff, etc). I just had to figure out how it worked in Wicket. And as Yichi can attest to, it was a source of frustration.

But in the end, the application was completed. Since Dean was off island, he took care of some administrative tasks (setting up the Google Code project and Hackystat, and doing some documentation). Yichi worked on the class that gets the carbon content from WattDepot. Aaron and I joined forces to take on Wicket. Both of us had annoying issues (Wicket and non-Wicket related), but I think we worked through them quite well. For the most part, we met briefly during the week to catch up on what each of us had been doing. Dean was also in contact with us through email. He mentioned that he'd be without a reliable internet connection, but he got a lot more done than I thought he would.


Here's our Software ICU data. Unfortunately, our coverage isn't quite as good as I'd like. The fact that our application is all AJAX makes testing slightly more difficult. There are WicketTester methods to deal with it and Aaron is looking into it as I write this entry.



You can download the source and the executable for E hoʻomaluō here. Consult the user guide and the developer guide for setting up the application.

Sunday, November 15, 2009

Version 2.0

This was a busy week in ICS 613. Over the past week, we had to do a new version of our command line interface for Watt Depot. This meant that we had to:
  • Improve our original implementation based on the feedback in the code review.
  • Install and use Hackystat and SoftwareICU to get a visualization of our group process.
  • Implement new commands.
  • Answer a few Watt Depot questions (task B) from Philip.
So, we basically took those four things in order. First, we needed to improve our original implementation. Yichi worked on making his code more readable and added better error messages. I had a clear vision of how I wanted to refactor the help command, so I spent a lot of time doing that: I added an abstract method that each command had to implement to print out its usage information, and then had the help command collect all of the help strings and print them out. I also made sure that the commands conformed to the new 2.0 specification. Because the first word of a command uniquely identifies it, I simplified the CommandProcessor to just look at the first word.
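
Sketching the idea from memory (the names below are illustrative, not our exact code), each command contributes its own usage string and the help command just walks the list of registered commands:

// Command.java
import java.util.List;

/** Base class that every CLI command extends. */
public abstract class Command {

  /** @return The first word that identifies this command (e.g. "energystats"). */
  public abstract String getName();

  /** @return A one-line usage/help string for this command. */
  public abstract String getUsage();

  /**
   * Runs the command.
   * @param args The whitespace-separated arguments after the command name.
   */
  public abstract void execute(List<String> args);
}

// HelpCommand.java
import java.util.List;

/** The help command simply collects the usage strings of every other command. */
public class HelpCommand extends Command {

  /** All registered commands. */
  private final List<Command> commands;

  /**
   * Creates the help command.
   * @param commands Every command registered with the processor.
   */
  public HelpCommand(List<Command> commands) {
    this.commands = commands;
  }

  @Override
  public String getName() {
    return "help";
  }

  @Override
  public String getUsage() {
    return "help: prints this message.";
  }

  @Override
  public void execute(List<String> args) {
    for (Command command : this.commands) {
      System.out.println(command.getUsage());
    }
  }
}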

The second step was to use Hackystat and SoftwareICU to visualize our group process. The setup was fairly tricky, but we eventually got it to work. I did update the Hackystat library and forgot to tell Yichi, but we resolved that fairly quickly.

The third step was to implement the new commands. This was pretty straightforward. I implemented one command while Yichi implemented two. I changed a few things in Yichi's code (I had helped a classmate work on one of the commands that Yichi was doing), but it was good for the most part.

Before I get to the questions posed by Philip, overall I'd say I was pretty pleased with the way the project went. Our design is pretty solid and our test coverage is 91% (a mere 1-2% increase over 1.0). There are some minor issues reported by the reviewers that we did not quite get to. The help refactoring meant that we could provide better error messages, but as of this writing we had not yet reworked all of the commands.

Yichi and I did not meet regularly. We mainly communicated via email, which I think worked for us. I might've spent more time on refactoring, but I had a design in mind that I wanted to implement. However, Yichi did implement two commands versus my one, so I guess the work was equally partitioned even if the time spent wasn't.

I'm not sure how I feel about the SoftwareICU. On the one hand, it's neat to visualize our project health. However, this revision was not a good time to get introduced to the SoftwareICU mostly because some aspects felt out of our control. For instance, many people had to refactor and/or rewrite parts of their code based on reviews. Thus, the churn reading on the SoftwareICU will probably be high for most of us. As for complexity, many students may have started out with a poor implementation and will be stuck with a high complexity reading unless they rewrite most of their code (although, I guess the implementation wasn't too complex yet). It'll be interesting to use the SoftwareICU on our next assignment, since we'll be starting with a clean slate.


Cue the vital sign beep.
So, this brings us to the questions posed to us by Philip about WattDepot. To answer them, I decided to go ahead and implement two additional commands ("energystats" and "carbonstats") to easily answer this question. I hadn't done much coding over the weekend due to illness, so it felt good getting back into it.

Robert had mentioned to us that the energy consumed by SIM_OAHU_GRID is zero for all dates since the data only represents power plants. Instead, energystats calculates the energy generated by the grid. Getting the hourly data takes a while on my internet connection, probably because it's a lot of data to gather. I split it up into 10-day increments to hopefully make it a little more reliable.

>energystats generated SIM_OAHU_GRID from 2009-11-01 to 2009-11-10 hourly
Max: 951.0 MW at 2009-11-02T20:00:00.000-10:00
Min: 497.5 MW at 2009-11-02T04:00:00.000-10:00
Average: 606.3 MW

>energystats generated SIM_OAHU_GRID from 2009-11-11 to 2009-11-20 hourly
Max: 951.0 MWh at 2009-11-11T20:00:00.000-10:00
Min: 497.5 MWh at 2009-11-11T04:00:00.000-10:00
Average: 609.4 MWh

>energystats generated SIM_OAHU_GRID from 2009-11-21 to 2009-11-30 hourly
Max: 951.0 MWh at 2009-11-23T20:00:00.000-10:00
Min: 497.5 MWh at 2009-11-23T04:00:00.000-10:00
Average: 603.1 MWh

So there seems to be at least a 3 way tie for max and min, although it might happen more regularly than that. Let's look at the daily data.

>energystats generated SIM_OAHU_GRID from 2009-11-01 to 2009-11-30 daily
Max: 14764.0 MWh at 2009-11-03T00:00:00.000-10:00
Min: 14089.0 MWh at 2009-11-02T00:00:00.000-10:00
Average: 14571.1 MWh

If only you could see me twiddle my thumbs while these commands do their thing. Finally, here's the carbon emitted statistics.

>carbonstats SIM_OAHU_GRID from 2009-11-01 to 2009-11-30
Max: 29959472.0 lbs at 2009-11-04T00:00:00.000-10:00
Min: 22908808.0 lbs at 2009-11-07T00:00:00.000-10:00
Average: 27141379.3 lbs

And that wraps it up for the command line client. We're implementing a web app in the coming weeks, so stay tuned!

Wednesday, November 11, 2009

We Want Your Feedback!

This week in class, we did code reviews on other implementations of the WattDepot Command Line Interface. To be honest, most of the code reviews I've been involved with were other people reading my code. For the first time in a long time, I'm reading other people's code to try and find ways they can improve it. At the same time, my code was reviewed by others in the class.

Some of the criticisms of our code were warranted. In some ways, our program did not provide enough feedback. Aaron Herres and Dean Kim provided command usage information when they reported an error, which is something we really need to do. Some of the classes also did not provide enough information about what occurred. Dean also pointed out something in the specs that we did not cover. As far as functionality goes, that was the main issue other than BJ pointing out that something didn't quite line up as stated in the help command. We do need to do a better job of common look and feel as well. I'm not sure what the best approach is, since I rarely look at what Yichi does. This seems to imply that we should be doing our own little code reviews on the side before the actual due date.

Some of the other issues that came up were non-issues though. Some commented on the length of the file names, which is more a part of my naming convention than anything else. Some suggested that the contents of the help file be moved to a separate file. While that is a good suggestion, I had a different idea (have each command implement a help string and then have the help command just collect them all). But there was also a fair amount of praise, which kind of surprised me since I simply followed Philip's recommendations over the weekend. I could say that the code structure was all our idea, but that would be a lie.

But the more interesting experience was reading other people's code. BJ, Wahib, and Lyneth's implementation was pretty solid in terms of functionality. They also followed Philip's recommendations, yet their organization was different from ours. Aaron and Dean followed some of Philip's recommendations, but they didn't go all the way with it in terms of packages and the like. Neither team was very comprehensive with their tests either. Yichi and I were both surprised when we looked at our coverage report, since we weren't even shooting for high coverage (ours is about 89%).

All in all, the feedback was welcome. The entire process has me thinking of better ways to organize our code. Well, it has me thinking about that more than usual.

Sunday, November 8, 2009

Reviewing Code Part 2

In this second part, I'll be reviewing code written by Aaron Herres and Dean Kim.

A. Review the build.
Downloaded and ran 'ant -f verify.build.xml'. Seems to run fine.
B. Review system usage
To start the system, it requires the WattDepot URL. The current service could be used as a default.
Got a stack dump when getting the summary for SIM_WAIAU.
>list source SIM_WAIAU summary
Exception in thread "main" java.util.NoSuchElementException
at java.util.AbstractList$Itr.next(AbstractList.java:350)
at java.util.Collections.min(Collections.java:570)
at org.wattdepot.cli.ListSourceCommandCli.getSourceEarliestDataTimestamp(ListSourceCommandCli.java:374)
at org.wattdepot.cli.ListSourceCommandCli.processSource(ListSourceCommandCli.java:453)
at org.wattdepot.cli.ListSourceCommandCli.processCommand(ListSourceCommandCli.java:487)
at org.wattdepot.cli.ListCommandCli.processCommand(ListCommandCli.java:76)
at org.wattdepot.cli.CommandLineInterface.processMainCommand(CommandLineInterface.java:209)
at org.wattdepot.cli.CommandLineInterface.processUserInput(CommandLineInterface.java:172)
at org.wattdepot.cli.CommandLineInterface.main(CommandLineInterface.java:269)
I also tried the following list commands, but it said they were invalid.
>list sensordata SIM_WAIAU timestamp 2009-11-01
The input string was invalid.
>list sensordata SIM_WAIAU day 2009-11-01
The input string was invalid.
I tried the chart command, but got the following results:
>chart power
chart power error. Usage: chartpower [generated|consumed] {source} {startday} {endday} sampling-interval {minutes}
>chart power generated SIM_KAHE 2009-11-01 2009-11-02 sampling-interval 120 file test.html
No power generated values returned for source
Note that the usage string says "chartpower". SIM_KAHE might not generate power, so I tried to check it:
>list powerGenerated SIM_KAHE day 2009-11-01 sampling-interval 120 statistic max
Exception in thread "main" java.lang.NullPointerException
at org.wattdepot.cli.CommandLineInterface.processUserInput(CommandLineInterface.java:173)
at org.wattdepot.cli.CommandLineInterface.main(CommandLineInterface.java:269)
Finally, I used the list carbon|energy command:
>list total energy SIM_KAHE day 2009-11-01 sampling-interval 120
list total error. Usage: list total [carbon|energy] generated {source} day {day} sampling-interval {minutes}
>list total energy generated SIM_KAHE day 2009-11-01 sampling-interval 120
1.5854300700617285E9
The first command is the syntax listed in the help command, so that should be changed.
C. Review the JavaDocs
The system summary looks good. The package summary could be improved since the system has changed from the initial simple example.
It's interesting that you are not required to put Javadocs on some defined constants (namely in ListSourceCommandCli). I have no problem with that as long as the name of the constant is descriptive enough (SOURCES_COMMAND is not very descriptive).
Method summaries are good enough as far as I can tell.
D. Review the names
I have a minor issue with Cli (or CLi, which is probably a typo) being appended to the end of class names. Seems unnecessary (ListSourceCommandCommandLineInterface?).
ListSensorData#toCLIOutput should be named "toCliOutput" (only the first letter in an acronym should be capitalized).
E. Review the tests
Some classes are not tested at all (ChartPowerCommandCli, ListTotalCommandCli, ListPower, and ListSensorData).
ListSourceCommand seems to be only partially tested. It doesn't seem to get sources with subsources.
F. Package design
All of the files are in one package. Seems like there should be at least a separate package for the command implementations.
G. Review the class design
Some of the command implementations include a fair amount of duplication. For example, ChartPowerCommandCli and ListTotalCommandCli have similar verification and timestamp-generation methods. These could be moved to a helper class that the command classes use to verify and parse input.
H. Review the method design
Some of the methods in the command implementations are listed as protected when they could be made private.
The system handles exceptions by printing stack traces. It should handle the exception and present a clean message to the user about what happened instead of a generic error.
I. Check for common look and feel
It is somewhat noticeable that two people worked on this. Some command implementations have protected methods while others have private methods that are hidden from the user.

Wednesday, November 4, 2009

Reviewing Code Part 1

Our latest thrilling assignment in software engineering is reviewing our classmates' implementations of the WattDepot command line interface. One of my instructors told me a long time ago that it's surprisingly easy to catch someone cheating even if they change their name and a few variables; everyone has their own style of coding. I'm looking forward to seeing how others wrote their programs. This blog will review an implementation written by BJ Peter DeLaCruz, Wahib Hanani, and Lyneth Peou. A later blog will cover the rest of the members of the class.
I'll be following this review checklist kindly provided to us by Philip Johnson.
A. Review the build.
Downloaded and ran 'ant -f verify.build.xml'. Seems to run fine.
B. Review system usage
I noticed a minor issue with the following command:
> list summary foo
There is no data for foo.
Error encountered when processing source name.
Seems that there are two error messages. Foo is an invalid source, so it should be an error instead of reporting that there is no data.
I also tried the following command:
> chart power generated foo 2009-11-01 2009-11-01 sampling-interval 120 file test.html
Data was successfully written to test.html.
Looks okay, but here's the output:
There probably should be a check to see if the day is equal. It also does not seem to be able to handle my bad source name.
C. Review the JavaDocs
The system summary and the package summaries look good to me.
Note that all command classes inherit the main method of CommandLineInterface according to the Javadoc.
In AbstractTestCommandLineInterface, the Javadoc seems to imply that the class contains test cases when it does not.
I noticed that TestSourceSummaryInformation checks that the number of subsources for "SIM_OAHU_GRID" is four. Because the WattDepot service is still in development, things might change on the server. If a subsource is added to SIM_OAHU_GRID on the server (i.e. SIM_OAHU_WIND), this test might fail.
D. Review the names
AbstractCommandLineInterface is not an abstract class, but instead is an interface. Perhaps it would be better named as "CliCommandInterface".
The parser method of CommandProcessor should be named "parse", as parser is a noun (EJS 23).
CommandLineInterface defines some constants like "SOURCE_MESSAGE" and "STRING_MESSAGE" for error messages. They could be more descriptive to say that they are errors (i.e. "SOURCE_ERROR" or "CONVERSION_ERROR").
The names of the command classes could be changed to better match the commands. The commands tend to start with "list" or "chart", so they could be named to match up better with the actual command (i.e. ListPowerDayStatistic or ListPowerTimestamp).
E. Review the tests
For this section, I ran their emma.build.xml, which uses the Emma build tool to check test coverage.
Getting the chart for power consumed does not seem to get tested.
SourceInformation, SourceListing, and SourcePowerGenerated are not covered at all. They don't seem to have any tests (questionable if Help needs a test).
SourceSummaryInformation for sources with properties does not seem to get tested.
Many tests test for bad input, but they don't seem to cover "good" input.
Also, the tests do print out a lot of information by default. Perhaps they could be toggled with a system property.
F. Package design
AbstractCommandLineInterface is in a different package from the classes that implement it.
G. Review the class design
In the CommandLineInterface, there are several constants defined, but they are not used by the CommandLineInterface. Perhaps these constants and the AbstractCommandLineInterface could be combined into an abstract class.
The main method in CommandLineInterface does a lot of input handling that could be handled in CommandProcessor. Most of it probably shouldn't be in the main method.
Be wary of public instance variables. isDebugging probably should not be changed by other classes. A "debug" instance variable and a isDebugging() getter would be better. Not sure what isError is used for, but a similar approach can be used (perhaps with "setError(boolean error)" if needed).
The command classes have methods other than doCommand() that are also publicly accessible, and they tend to have similar arguments. Should a user be able to invoke some of the commands independently of doCommand()? If not, those methods should probably be made private.
Note that the Help class overrides CARRIAGE_RETURN when it doesn't need to.
H. Review the method design
In CommandLineInterface#main, the last continue statement is not needed at all. Again, most of the conditions should be handled in CommandProcessor.
Chart#chartPowerToHTML might be better implemented as two methods, one that creates the Google Chart and another that writes the file.
I. Check for common look and feel
Seems like the code was pretty uniform. I couldn't really see any discernible differences between the implementations.
Phew, I think I'm done now. Check back later for part 2!

Tuesday, November 3, 2009

Lucky 13

I think that there isn't enough group work in computer science classes. Granted, I've taken most of my classes here at UH, so I don't know how other schools are. But it seems like we have students who graduate thinking they can hack it out by themselves, when 90% of the time they will need to work in a team with other programmers. Maybe it's my limited view, but I hope more and more students learn how to work in groups. It also helps if students have access to code repositories and maybe even continuous integration.

As part of working on a command line interface for WattDepot, we learned about CI. More importantly though, we started working with one or two group partners and applied all of the tools we learned about in class to complete this assignment. My partner for this project was Yichi Xu, and our group name was "umikumakolu", which is Hawaiian for thirteen. We decided to split the 10 commands in half so that we each implemented five. I did take some of the harder ones, but I had few issues implementing them.

At first, we got off to a rocky start. Because we were modifying the same source file, we encountered Subversion merge conflicts constantly. After a few pointers from Philip, though, we refactored the code to separate the commands into individual files; in hindsight, our initial design was pretty bad, if not terrible. Because each command ended up in a separate file, we saw few merge conflicts after that. We later refactored again to follow some design patterns suggested by Philip. Honestly, I don't know very many design patterns (singleton, and now dispatch tables). I should read up on them more, because these design patterns simplify my code and may even make it more stable.
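
For anyone curious, a dispatch table here is just a map from the first word of the user's input to the object that handles it. A minimal sketch (illustrative names, not our exact code; it assumes a Command interface with getName() and execute()):

// Command.java
import java.util.List;

/** Minimal interface that each command implements. */
public interface Command {

  /** @return The first word that identifies this command, e.g. "chart". */
  String getName();

  /**
   * Runs the command.
   * @param args Everything on the line after the first word.
   */
  void execute(List<String> args);
}

// CommandProcessor.java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Routes user input to the command whose name matches the first word. */
public class CommandProcessor {

  /** The dispatch table: first word of the input -> command object. */
  private final Map<String, Command> commands = new HashMap<String, Command>();

  /**
   * Registers a command in the dispatch table.
   * @param command The command to register.
   */
  public void register(Command command) {
    this.commands.put(command.getName(), command);
  }

  /**
   * Looks up the first word and hands the rest of the line to that command.
   * @param line The raw line typed by the user.
   */
  public void process(String line) {
    List<String> words = Arrays.asList(line.trim().split("\\s+"));
    Command command = this.commands.get(words.get(0));
    if (command == null) {
      System.out.println("Unknown command. Type 'help' for a list of commands.");
      return;
    }
    command.execute(words.subList(1, words.size()));
  }
}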

I applied test-driven development to create a few of the commands. Hence, my tests are somewhat comprehensive in that they check for all sorts of bad input. I did find that the tests sometimes took a while because they needed to make multiple requests to the WattDepot server. Then again, that was when all of the tests were in one file, so I had to run all of them each time I made a small change. With the program's current structure, though, I think TDD would go a lot more smoothly.

I'm happy to say that we completed the commands as outlined here. I uploaded our distribution to the WattDepot-CLI project, which you can find here.

Sunday, November 1, 2009

Everything's Shiny Captain

I mentioned before that one reason why having a code repository is great is because we can always go back to a stable state. Things become more complicated if other users are committing code as well. How do you know if the code in the repository is stable? And if it's not, how do you know whose change to roll back?

During my time in the LILT lab, I quickly became familiar with CruiseControl, which is a Continuous Integration (CI) tool. Whenever we committed code, the unit, functional, and acceptance tests would all run. And when everything was successful (cause I always checked my code... not) it would be green. And if someone broke something, it would be red. I actually was kind of afraid to commit code at times, since I was afraid of breaking the build and having the blame log be pointed at me (who was still very new to this whole programming thing). But I grew to accept it and became more comfortable using it. And I now understand how important it was to the overall health of the project.

In our software engineering class, we are creating a command line interface for WattDepot, which is part of a research project here at UH. This gives us the chance to apply a lot of the things we've learned over the past few weeks (automated QA, code repositories, and style guidelines) while working with fellow classmates. We were also introduced to Hudson, which is another CI tool that integrates with tools we already use in class (namely JUnit and Ant). It was pretty easy to set up so that it checked for a new commit every 5 minutes. At first, it failed the build since the code did not pass Checkstyle. Once that was fixed though, the weather cleared up and we haven't failed builds since. Make sure you run verify before committing!

So as Miss Kaylee Frye from Firefly/Serenity would say...

Sunday, October 18, 2009

The Kind of Tests We Don't Like

We have a midterm coming up on Wednesday, October 21st. Our assignment this week is to come up with some questions for the exam. Here are my 10 questions.

A Java class called "MyDate" has a method "getDay" that takes an integer and returns a string representation of the day of the week (e.g. getDay(0) = "Sunday", getDay(5) = "Friday"). If the day number is not valid, "getDay" returns null.

1. What are the equivalence partitions for "getDay"?

There are 3 partitions for this method. Let dayNum be the variable that is passed in to getDay(). Then the partitions are (a) dayNum < 0, (b) 0 <= dayNum <= 6, and (c) dayNum > 6.

2. Write a unit test for "getDay".

/**
 * Tests the getDay method.
 */
@Test
public void testGetDay() {
  int dayNum = -1;
  assertNull("Testing with dayNum < 0", getDay(dayNum));
  dayNum = 0;
  assertEquals("Testing valid day", "Sunday", getDay(dayNum));
  dayNum = 7;
  assertNull("Testing with dayNum > 6", getDay(dayNum));
}


3. According to Richard Gabriel in "The Poetry of Programming", how is writing poetry similar to writing code?
Richard Gabriel states that his mind is in a "particular place" when he is writing either code or poetry. He is thinking about the possibilities and directions as he is writing.

4. Bob has some code in a Subversion code repository. What svn commands would I use to (a) get a fresh copy of his code, (b) make sure I have the latest revision, and (c) submit my code changes?

(a) "checkout", (b) "update" (or "status --show-updates"), and (c) "commit".

5. Name a reason why you might want to use a Java collections class instead of an array class.

Fairly open ended question. One good reason is that you might need a set or a hash table. These are implemented as collections.

6. What is Linus' Law and how does it apply to the "bazaar" style of development?

Linus' Law is "Given enough eyeballs, all bugs are shallow". This applies to the bazaar style of development because problems will be found quickly and the fix may be obvious to someone in the bazaar.

7. Why is measuring code quality by test coverage a bad idea?

One reason is that the test coverage report does not show if all possible paths through the code are taken.

Questions 8-10 deal with the following implementation of onScannedRobot:

/**
* Event that is thrown when an enemy is detected.
*/
public void onScannedRobot(ScannedRobotEvent event) {
  String currentEnemy = null;
  double bearing = event.getBearing();
  if(event.getName() == currentEnemy) fire(3);
}


8. Name a violation that would be found by Checkstyle.

There are two issues (as far as I know). The obvious one is that the if statement is inline and not a block statement. The other is that there is no @param tag for event.

9. Name a violation that would be found by PMD.

PMD would notice that bearing is set but never used. It might also catch an error found by Checkstyle.

10. Name a violation that would be found by Findbugs.

Findbugs would find the NullPointerException that this code will probably throw when it is run.

Tuesday, October 13, 2009

Safety Net

It's hard to imagine how people got by without configuration management systems. Without one, a single bad change could blow up the entire project; at the very least, it would take more time to track down and fix the mistake. And the programmer who committed the bug might be looking for a new job.

I'm a fan of configuration management systems like Subversion and Git. In a team environment, it seems to be a necessity. But it can even be useful for individuals like ICS students working on assignments and projects. We've all introduced bugs in a program and had to spend time fixing them. Students can use the repository to make sure they always have a stable copy of their code to go back to if they mess something up badly. It's akin to how some computer games allow you to press a key and save your progress at any time. With that safety net, players will often save after every major event so that they don't have to go back and do it again. You can take it to an extreme where you save after every enemy you defeat. After all, you're still alive, so you must've done something right. It would be interesting to see if this kind of behavior would emerge if we didn't teach students good "commit etiquette". Would the students commit their work consistently or just not use the service at all?

In class, we had an introduction into Google Project Hosting and Subversion. Google Project Hosting is a great service for open source projects. It provides a central place for committing and managing code without having someone create their own server. But it also has features like issue tracking, bug reporting, and wiki pages. Best of all, it's free as long as your project is open source.

I set up a project for my Robocode robot Menehune. It was fairly easy to create a project, add wiki pages, and commit my code. I also added two classmates as committers to the project. This does mean they can sabotage my robot, but at least Subversion lets me roll back changes. I was also able to create a Google Group for the project. To create the group, I had to go to groups.google.com and create it there, then add it to the project. I was surprised that there wasn't a link on the project management page to create a new group, but I figured it out. There was a feature where we could receive message postings and emails when someone commits code to the project through Google Groups. Apparently, they're rolling out a new way to do this, so we aren't able to add this feature for the time being.

The Menehune Project page

Overall, we had a brief introduction into configuration management systems and Google Project Hosting. The main project in our class is coming up, so I think we'll become much more familiar with both of these things in the next few weeks.

Tuesday, October 6, 2009

I Love Tests

I don't think you usually hear a college student say that. I love tests. But ask any software developer and they'll probably give you the same response. Well-written tests not only make sure that your code is working well now, but that it'll keep working well in the future. And there's Test Driven Development (TDD), where we write a failing test first and then write the code needed to pass the test. I had the opportunity to hear Kent Beck (co-creator of JUnit) speak about TDD and JUnit. He mentioned that TDD seemed like such a silly idea: you write code that doesn't work and then have to write more code just to fix it. Yet they used TDD when creating JUnit and found that they were more productive. If you're not a believer, maybe you need to take Software Engineering at UH Manoa.

I always liked tests, but relearning about them in ICS 613 reignited my passion. So much so that I wrote a few unit tests for my iPhone application (Apple calls them "logic tests") and did some TDD to add a new feature. So I dove into writing a few tests for my Robocode project. I had ideas on what tests I could run, but as I said in my previous blog, things rarely go as planned.

I had the idea that I would create unit tests that test the event handlers for my Menehune robot (OnScannedRobot and OnHitByBullet). I'd just create a bogus event and test that certain properties of Menehune were set. The difficulty was that if we instantiate our Menehune robot, we need to run the run method to access certain properties of the robot. Unfortunately, the run method is typically an infinite loop. In the OnHitByBullet test, I could get by without having any exceptions thrown. As for OnScannedRobot, I did a check that the exception is thrown (the test fails if it isn't thrown) and then check for some internal properties. To do that I had to move my code around so that movement and firing decisions came later.
  public void testOnScannedRobot() {
    // Initialize an event with bogus values.
    ScannedRobotEvent event = new ScannedRobotEvent("Foo", 100, 0, 10, 0, 0);
    try {
      robot.onScannedRobot(event);
      fail("Should have thrown exception where we attempted to move or fire.");
    }
    catch (RobotException e) {
      assertEquals("Testing scanned robot name.", event.getName(), robot.getScannedRobotName());
      assertEquals("Testing robot mode.", Menehune.RobotMode.FIRE_MODE, robot.getMode());
    }
  }



Example where I fail if an exception is not thrown.

Next, I needed to test my robot's behavior. We were given Philip Johnson's RobotTestBed code, which provides a test harness for our robots. We could check the conditions of the robots after each turn, round, or battle. I had the idea that I'd access methods in my Menehune robot to make sure that it was behaving properly. Yet I was foiled again! We can't actually access instances of our robot; we just have a generic overview (snapshots, in this case) of where the robots are at a given time. Thus, to determine if my robot was behaving properly, I tracked its movement when it encountered an enemy. That way I could check that the robot tracked down an enemy and then repositioned itself.
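
To give an idea of what that looks like, here is a rough reconstruction from memory; the base-class and snapshot API details (method names, packages) are as I recall them from the RobotTestBed distribution and may not match exactly, and the test logic is simplified:

import org.junit.Assert;
import robocode.control.events.BattleCompletedEvent;
import robocode.control.events.TurnEndedEvent;
import robocode.control.snapshot.IRobotSnapshot;
import robocode.control.testing.RobotTestBed;

/**
 * Checks Menehune's behavior indirectly: since we can't touch the robot
 * instance, we watch its position in the turn snapshots instead.
 */
public class TestMenehuneMovement extends RobotTestBed {

  /** Menehune's position on the previous turn (negative until initialized). */
  private double lastX = -1;
  private double lastY = -1;
  /** True once Menehune has moved from its previous position. */
  private boolean moved = false;

  /** @return The robots taking part in this battle. */
  @Override
  public String getRobotNames() {
    return "myrobots.Menehune,sample.Fire";
  }

  /** Records Menehune's position each turn and notes whether it changed. */
  @Override
  public void onTurnEnded(TurnEndedEvent event) {
    // Index 0 corresponds to the first robot listed in getRobotNames().
    IRobotSnapshot menehune = event.getTurnSnapshot().getRobots()[0];
    if (lastX >= 0 && (menehune.getX() != lastX || menehune.getY() != lastY)) {
      moved = true;
    }
    lastX = menehune.getX();
    lastY = menehune.getY();
  }

  /** After the battle, Menehune should have repositioned itself at least once. */
  @Override
  public void onBattleCompleted(BattleCompletedEvent event) {
    Assert.assertTrue("Menehune should move during the battle.", moved);
  }
}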

Tests against other enemies went fairly smoothly though. I gave myself a fairly conservative win percentage against RamFire, but I just wanted to make sure that I win most of the time.

I then ran a code coverage tool (Emma) to see how well my tests cover my robot. Overall it was pretty good. I used an enum type to represent the different modes of my robot. Emma got me for not testing all of the methods for an enumerated type, which I think is kind of bogus. My robot also apparently didn't turn in certain directions, so those lines were missed. Finally, I only tested my robot against one enemy, so the line that skips the enemy if we're not tracking it is not covered. Further tests might cover some of these aspects, but I'd say it's pretty good coverage otherwise.

So what if I never called RobotMode.valueOf()?

Overall, I can say that my love for tests is back. If we spend more time developing and improving our robots, I might be inclined to write a few more. I love tests. Exams, not so much.

You can download my robot here.

Tuesday, September 29, 2009

Find My Bugs

Update: Added new distribution for JUnit lab.
No matter how much preparation or thinking you've done, programming almost never really goes smoothly. It surprises me when something I wrote doesn't crash and burn on the first try, because I expect there to be something wrong. Rarely am I surprised when I see the words "NullPointerException" on my screen. And it's usually small mistakes that I'm sure thousands upon thousands of programmers have made before me.
One of the things that amazes me about Java is the variety of libraries and tools that are out there. Tools like Checkstyle, PMD, and Findbugs are unique to Java as far as I know, and I've found them to be useful too. PMD and Findbugs can catch most of those silly errors that every programmer makes, and on a larger project those silly errors become much harder to track down. I can also see the use of Checkstyle for enforcing style and documentation in large projects. For small projects and assignments where the code size is small, it's a little harder to see the benefit.
The thing about these automated QA tools is that people tend not to use them if it's not easy for them to do so. I've been using the Eclipse IDE for so long that I don't even remember what the proper invocation is to compile all of my Java sources on the command line. And invoking the above tools via the command line would be a pain. Fortunately, we have build systems that help make these things easier. While we'd use Make in most cases, Apache Ant seems to be the most recommended system for Java programming. Ant also has support for all of our QA tools, making it much easier for us to use.
So after learning about build systems and automated QA in class, we applied them to our little Robocode robots. For the most part, I feel that I am a stickler for style. However, Checkstyle caught an instance where I missed a space in an if statement and one where I didn't end a Javadoc sentence with a period. I also neglected to include a package.html file, which I imagine most of the students in the class also forgot to do. PMD caught a statement in my Robot class where I had written "if (x != y) {...} else {...}" when the positive condition would've been clearer. Findbugs had no issues with my code. So my issues were pretty much documentation and readability issues. Again, for a small project like ours, it is a bit of an annoyance. But for a larger project, I can definitely see the importance if others will be reading and/or using our code.

How do you like my style? Not so much I guess.


Okay, that is a bit confusing.

We used the Ant files provided with Philip's DaCruzer robot distribution. Thank goodness too, because putting these Ant files together must be a lot of work. Fortunately, they're generic enough to be applied to any Java project as long as the proper entries are edited and/or removed. After I had edited the Ant files, it was rather easy to run the tools on my robot and eventually get my code to pass "ant -f verify.build.xml", which runs all of the tools and fails if any of them report issues. So we've gotten our crash course in build systems and automated QA. Now it's time for testing!
You can download my Menehune distribution here.

Sunday, September 20, 2009

You Can Find Your Robot in the Junkyard

*Disclaimer: This blog entry is not to be taken too seriously.
That's right, I'm talking trash already. After all the practice assignments, it's time for our Robocode tournament. My mighty Menehune bot is taking home the gold.
Let's go over the strategies:
- Movement
To take down the sample robots, my robot has two movement strategies based on whether or not the enemy is stationary. If the enemy is stationary, Menehune moves up and down (similar to Walls) and fires when it sees the robot. If the enemy is too close, Menehune moves away before going into the vertical pattern.
If the enemy is moving, Menehune is going to follow it. Movement is similar to the Tracker robot.
If we hit another enemy, Menehune assumes the robot is not stationary (it would've moved away from it initially). Menehune moves away a bit before tracking it again.
- Targeting
Targeting is straightforward. If the enemy is stationary, Menehune just points the radar left or right (depending on where the enemy is). If the robot is moving, Menehune applies Tracker's method of turning the gun and pointing the radar at the enemy.
There's a counter built in that tracks how long it's been since Menehune last saw the enemy. If Menehune hasn't seen the enemy in a while, it will stop to reacquire it.
- Firing
If the enemy is stationary, Menehune can fire a full power bullet without fear of it missing. Otherwise, Menehune gets close before firing the bullet. Menehune shoots a weaker bullet if the enemy is a bit far away. Also, if the enemy hits Menehune, it retaliates by shooting back before moving away.
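For the curious, the firing decisions boil down to something like the sketch below; it's a simplified illustration of the idea rather than the actual Menehune source, and the distance threshold is made up.

import robocode.HitByBulletEvent;
import robocode.Robot;
import robocode.ScannedRobotEvent;

/** Simplified sketch of Menehune-style firing decisions. */
public class FiringSketch extends Robot {

  /** Distance (in pixels) under which we consider the enemy "close". */
  private static final double CLOSE_RANGE = 200.0;

  /** Fires hard at stationary or close enemies, lighter at distant ones. */
  @Override
  public void onScannedRobot(ScannedRobotEvent event) {
    boolean stationary = event.getVelocity() == 0;
    if (stationary || event.getDistance() < CLOSE_RANGE) {
      fire(3); // full-power bullet, little risk of missing
    }
    else {
      fire(1); // weaker bullet so we waste less energy on a likely miss
    }
  }

  /** Retaliate, then move away so we aren't a sitting target. */
  @Override
  public void onHitByBullet(HitByBulletEvent event) {
    fire(2);
    turnRight(event.getBearing() + 90); // turn perpendicular to the shooter
    ahead(100);
  }
}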
-------------------------------
Of course, the strategies look good on paper, but how do they fare against the team in the sample package?
- Corners
Menehune tracks Corners as it tries to move into a corner of the map. Once Corners stops moving, Menehune just moves up and down and shoots a full power bullet.
Menehune does get caught in situations where it moves directly above the corner robot. Not as clear cut of a winner, but it still wins more often than not.
Winner: Menehune
- Crazy
Crazy obviously moves a lot. Menehune does its best to track it down. More often than not though, Crazy gets a little dizzy and becomes disabled, making it easy to pick off.
Winner: Menehune
- Fire
Because Fire doesn't move at all, it's easy to kill using the stationary robot strategy. No contest.
Winner: Menehune
- RamFire
The first real test. Unfortunately, Menehune just can't get away from RamFire. We still win from time to time, but RamFire kills more often than not.
Winner: RamFire
- SpinBot
SpinBot's movement makes it really difficult to follow and track. Menehune does its best, but it always bumps into SpinBot. SpinBot always fires at full blast too, so it deals a lot of damage as Menehune closes in. Sometimes the matches are pretty even, but SpinBot does have an edge.
Winner: SpinBot
- Tracker
Menehune uses a lot of Tracker code, so one would think that it'd be pretty even. Unfortunately, Tracker registers as a stationary bot for a little while and gets in a few shots before we start tracking it down. Menehune doesn't put up much of a fight.
Winner: Tracker
- Walls
Walls just dominates Menehune. The tracking just can't keep up with Walls' movement. In essence, Menehune moves to a spot where Walls had long since passed.
Winner: Walls
Overall: 4-3 Sample bots (didn't count SittingDuck, cause that's a freebie).
-------------------------------
By now, you probably realize that all of the intro stuff was just talk and that there's nothing fancy going on here. Tracking an enemy using the base Robot class is extremely difficult. Given a version 2.0, maybe we can use the AdvancedRobot class to move around the field better. Comments in the Tracker code allude to this as well. It'll be interesting to see how it fares in the class tournament. Let's see what you got!
Download my bot here.

Tuesday, September 15, 2009

Imitation is the Sincerest Form of Flattery

Our simple Robocode robots last week gave us an introduction into Robocode. Our last step before creating a competitive robot is to review some of the sample robots to give us some ideas for movement, tracking, and firing strategies.

Walls:
The movement for Walls is very simple. It turns to one of the cardinal directions (0, 90, 180, or 270 degrees) and moves all the way to the wall. Once there, it moves around the battlefield along the walls. What's unique is the use of the "peek" variable that stops the robot from moving if it finds an enemy. Otherwise, targeting and firing are pretty straightforward. The gun always points into the field as the robot moves around.

RamFire:
RamFire has an interesting strategy. As far as the run loop goes, it simply turns around looking for an enemy. Once it finds a robot though, RamFire attempts to hit the robot by moving in a straight line 5 pixels past the event's location and then scans the field if it misses. The neat thing it does is that it scales the power of the bullet based on the remaining life of the robot. The purpose is that there are bonus points for destroying the robot by ramming into it. Thus, it wants to weaken the enemy to the point where ramming into it kills it.

SpinBot:
SpinBot moves in a circle a lot. What's interesting to me is that it does so without any complicated equations. In my simple robot implementation, I calculated points along the circle and moved to those points. SpinBot instead sets its turn to some value, sets a max velocity, and moves. The robot extends AdvancedRobot, which is why I didn't see setTurnRight() and setMaxVelocity() in the Robot API. Targeting and firing are really basic: the radar turns with the robot, and the robot fires if it finds an enemy.

Crazy:
After using Crazy as a test robot for a while, I was surprised by its implementation. You'd think that it's very random, but it isn't. It moves in arcs: first a 90 degree arc to the right, then a 180 degree arc to the left, and finally another 180 degree arc to the right. The robot simply reverses direction if it hits a wall. If you slow it down, you can see the arcs that Crazy moves in. These arc movements are something to consider when making a competitive robot; Crazy seems to use the AdvancedRobot class as well. Other than that, targeting and firing are basic. The radar turns with the robot and the gun fires if an enemy is found.

Fire:
For the most part, this robot is identical to our Firing01. It sits there, rotates its gun, and fires at scanned enemies. It does move perpendicular to the incoming bullet when it gets hit, though, so it's not a total sitting duck. The robot also scales its bullet power based on how much energy it has left. Finally, it fires hard at any robot that runs into it.
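
The dodge might look something like this sketch (my own take, not the sample's code; the class name, firing numbers, and step distance are assumptions):

import robocode.HitByBulletEvent;
import robocode.Robot;
import robocode.ScannedRobotEvent;

/** Mostly sits and fires, but sidesteps whenever it takes a hit. */
public class SidestepSketch extends Robot {
  public void run() {
    while (true) {
      turnGunRight(10); // Sweep the gun; the body stays put.
    }
  }

  public void onScannedRobot(ScannedRobotEvent e) {
    fire(getEnergy() > 50 ? 3 : 1); // Spend more power only when we can afford it.
  }

  public void onHitByBullet(HitByBulletEvent e) {
    // The bullet's bearing plus 90 degrees points us roughly perpendicular
    // to the incoming shot, so step out of the line of fire.
    turnRight(e.getBearing() + 90);
    ahead(100);
  }
}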

SittingDuck:
I imagine most would be surprised by the implementation of SittingDuck. For a robot that does nothing, there's a lot of code! All that code is there to maintain a persistent round count so that SittingDuck can report how long it's been sitting still. So as far as movement, targeting, and firing are concerned, there is nothing to see here. But there might be something to persistence. Maybe you could record the first enemy to die and pick on it every round, since it might be weak.
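
If I try that, it could look something like the sketch below (the file name and class are my own; the key pieces, as far as I remember, are AdvancedRobot's getDataFile() and RobocodeFileOutputStream, which robots have to write through):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintStream;

import robocode.AdvancedRobot;
import robocode.RobocodeFileOutputStream;

/** Keeps a round counter that survives between rounds, SittingDuck-style. */
public class RoundCounterSketch extends AdvancedRobot {
  public void run() {
    int rounds = 0;
    // Read the previous count from this robot's data directory, if it exists.
    try {
      BufferedReader in = new BufferedReader(new FileReader(getDataFile("count.dat")));
      rounds = Integer.parseInt(in.readLine().trim());
      in.close();
    } catch (Exception e) {
      // First round or unreadable file; start counting from zero.
    }
    rounds++;
    // Writes go through RobocodeFileOutputStream, which enforces the file quota.
    try {
      PrintStream out = new PrintStream(new RobocodeFileOutputStream(getDataFile("count.dat")));
      out.println(rounds);
      out.close();
    } catch (IOException e) {
      // Out of quota or disk error; skip persisting this round.
    }
    System.out.println("Round " + rounds + " of sitting perfectly still.");
    // No movement, targeting, or firing; that's the point.
  }
}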

Corners:
Pretty basic movement strategy again: it just tucks itself into a corner. That's a pretty smart strategy, though, as you now only have to rotate your gun 90 degrees to cover the entire field. On the other hand, it is still, in effect, a sitting duck. Once a robot targets it, it can be taken out fairly easily. A better strategy might be to sit in the corner until you get hit, then move to another corner.
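
A quick sketch of that corner tuck (my own code; it assumes the bottom-left corner and a hard-coded firing power):

import robocode.Robot;
import robocode.ScannedRobotEvent;

/** Tucks into the bottom-left corner and sweeps a 90-degree firing arc. */
public class CornerSketch extends Robot {
  public void run() {
    // Face due south, drive to the bottom wall, then face west and drive to the left wall.
    turnRight(180 - getHeading());
    ahead(getBattleFieldHeight());
    turnRight(90);
    ahead(getBattleFieldWidth());
    // Point the gun back into the field (north) before sweeping.
    turnGunRight(90);
    while (true) {
      // From a corner, the whole field fits inside a 90-degree arc.
      turnGunRight(90);
      turnGunLeft(90);
    }
  }

  public void onScannedRobot(ScannedRobotEvent e) {
    fire(2);
  }
}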

Tracker:
This is another robot that is similar in concept to one of our sample robots. All it does in its run loop is search for an enemy. Once an enemy is found, Tracker moves close to it and fires when it's within a certain distance. The way Tracker implements tracking might be better than my current implementation; something to consider if I decide to use a similar "hunt and destroy" technique.
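
The core of that hunt-and-destroy loop might look like the sketch below (my reconstruction of the idea, not Tracker's actual source; the distances are placeholders):

import robocode.Robot;
import robocode.ScannedRobotEvent;

/** Spins until it sees someone, then closes to firing range. */
public class HunterSketch extends Robot {
  public void run() {
    while (true) {
      turnRight(10); // Sweep the whole robot so the gun stays lined up with the body.
    }
  }

  public void onScannedRobot(ScannedRobotEvent e) {
    turnRight(e.getBearing()); // Face the target; the gun is still aligned with the body.
    if (e.getDistance() > 150) {
      ahead(e.getDistance() - 140); // Too far away; close most of the gap first.
    } else {
      fire(3);                      // Close enough; shoot.
    }
  }
}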

Sunday, September 13, 2009

Care to Comment?

I'm not sure the importance of formatting and comments is really beaten into your head as you go through computer science classes. In beginning classes, there isn't a whole lot of need for formatting and comments; most assignments are simply "correct" or "incorrect". Of course, if the output of your assignment is incorrect, you may still hope for some partial credit. Then a poor TA has to read your code and decide what your grade should be based on what they could understand. In later classes, professors kindly ask that your code be commented because they need to read it. That's the point where your code is no longer some trivial assignment but more of a project.

And readability in large projects is important because they tend to have many people working on the same code base. One person may be using some methods that another has written. People who maintain the project may be working on code that someone else had written in order to fix bugs found after release. And even if there aren't multiple people working on a project, readability helps when you look at older code. It's kind of interesting to go back and look at old code that you had written. You probably don't even remember writing it or what it does. But if your code is readable, you quickly get up to speed with what you had written.

I think we're fairly good with formatting since good style is taught as you learn how to code. But I don't think we're really taught how to write good comments. Instructors and professors only ask that you don't write something blatantly obvious, like "this sets x to 10" or "loop from 0 to 10". And despite having taken 413 a few years ago, I admit that the formatting and comments in my Java files aren't really up to snuff (the style and comments in my C files aren't all that great either; maybe I need this book as well). So when we were given the chance to go back and clean up our Robocode files, I spent the most time adding Javadocs to my classes. I also did some refactoring by moving often-used methods into a separate robot file (BaseBot.java) and having the simple robots inherit from that robot. Finally, I formatted each of my source files using an Eclipse formatting template.
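
The refactoring itself is nothing fancy. Roughly, it looks like the sketch below (simplified; the real BaseBot.java has more helpers, the subclass name here is just for illustration, and I've used Math.atan2 to keep the angle math short):

import robocode.Robot;

/** Shared helpers for the simple robots (a simplified sketch of BaseBot.java). */
public class BaseBot extends Robot {
  /** Turns the robot so its heading points at (x, y). */
  protected void turnToPoint(double x, double y) {
    double distX = x - getX();
    double distY = y - getY();
    // atan2 handles all four quadrants, so there's no special case for heading south.
    double angle = Math.toDegrees(Math.atan2(distX, distY));
    turnRight(angle - getHeading());
  }
}

/** Example subclass: drives to the center of the battlefield. */
class CenterBot extends BaseBot {
  public void run() {
    double centerX = getBattleFieldWidth() / 2;
    double centerY = getBattleFieldHeight() / 2;
    turnToPoint(centerX, centerY);
    ahead(Math.hypot(centerX - getX(), centerY - getY()));
  }
}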

What annoys me about coding style is that there tends to be no real standard across all programming languages. For example, in the Elements of Java Style, rule 13 states "Capitalize only the first letter in acronyms". If only Apple used that rule in their Objective C code, then I wouldn't consider using the caps lock key to type NSHTTPURLResponse every time I want to process a request from a web server. Then again, I understand why C programming style generally is the way it is with terse variable names (because of memory constraints). And some languages lend themselves to different styles and conventions. When in doubt, rule #1 in EJS should stand across all languages: "Adhere to the style of the original".

Click here to download my new and more readable Robocode code.

Tuesday, September 8, 2009

Robocode Newbie

Last week was our first introduction to Robocode. Robocode is a game where players create software robots using the Java programming language. These robots can either go head to head with another robot or be placed into a free-for-all melee with many other robots. Games like Robocode are a great way to get Computer Science students interested in programming. If I were teaching an AP Computer Science course in high school, I would have them make robots after they take their AP test as a reward for sticking it out for the entire year.

While Robocode is a great educational tool, it is also complex and deep. The phrase "easy to play, difficult to master" comes to mind. This is especially apparent when viewing the source code of the various movement and targeting strategies on the Robocode wiki. As part of our brief introduction, we were asked to implement the simple robots listed on the assignment page. While implementing the robots, I got to know the Robocode API really well. I did implement all of them, but not without some trouble.
Moving the robot to a specified point is not as simple as calling moveRobot(a, b). While I could've written an implementation that doesn't require trigonometry, I chose to brush up on my old math skills. I created two functions: one that determines how much the robot should turn and one that actually turns the robot.

/**
 * Gets the angle to a point on the battlefield.
 * @param x X coordinate of the point to move to.
 * @param y Y coordinate of the point to move to.
 * @return The angle between the point and the Y axis in degrees.
 */
private double getAngleToPoint(double x, double y) {
  // Get the heading to the target point.
  double distX = x - this.getX();
  double distY = y - this.getY();
  double angle = Math.toDegrees(Math.atan(distX / distY));

  // Check if the robot needs to move south instead of north.
  if (distY < 0) {
    angle += 180.0;
  }
  return angle;
}

/**
 * Turns the robot to set its heading to point (x, y).
 * @param x X coordinate of the point.
 * @param y Y coordinate of the point.
 */
private void turnToPoint(double x, double y) {
  double turnAngle = this.getAngleToPoint(x, y) + (360 - this.getHeading());
  this.turnRight(turnAngle);
}

The trickiest part was implementing Firing04. To track the robot, we needed to predict where the enemy robot was going next and then turn in that direction. Fortunately, the ScannedRobotEvent class provides a lot of information about where the robot is going. We just needed to calculate the angle at which we should turn to track the robot. Using both the law of cosines and the law of sines from trigonometry, I think I came up with a reasonable solution to tracking the robot.

// Steps through turning the gun by this interval.
private static final double TURN_INTERVAL = 20.0;
private ScannedRobotEvent currentEvent;

public void run() {
  this.setAdjustRadarForGunTurn(false);

  while (true) {
    // Scan for a robot.
    if (this.currentEvent == null) {
      this.turnGunRight(TURN_INTERVAL);
    }
    // If a robot is found, track it.
    else {
      double absoluteHeading = this.getHeading() + currentEvent.getBearing();
      // Angle between our line of sight and the enemy's direction of travel.
      double predictedAngle = 360.0 - currentEvent.getHeading() - (180 - absoluteHeading);

      // Use the law of cosines to approximate the new distance after the enemy moves.
      double predictedDist = Math.sqrt(Math.pow(currentEvent.getDistance(), 2)
          + Math.pow(currentEvent.getVelocity(), 2)
          - (2 * currentEvent.getDistance() * currentEvent.getVelocity()
              * Math.cos(Math.toRadians(predictedAngle))));

      // Use the law of sines to approximate the new absolute heading.
      double turnAngle = Math.toDegrees(Math.asin((currentEvent.getVelocity() / predictedDist)
          * Math.sin(Math.toRadians(predictedAngle))));

      // With this information, turn the gun to find the enemy.
      this.turnGunRight(absoluteHeading + turnAngle - this.getGunHeading());

      // Reset the current event and find the enemy again.
      this.currentEvent = null;
      this.scan();
    }
  }
}

public void onScannedRobot(ScannedRobotEvent event) {
  this.currentEvent = event;
}

With Firing04, I think I can make a robot that tries to predict the enemy's location and then fires a bullet there. Robots with complicated movements will probably throw it off, but I think I can make a reasonable guess if I incorporate the enemy's size. Implementing a complex movement pattern seems to be the best way to stay alive as well.
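
As a starting point for that, linear prediction could look something like the sketch below (my own rough idea; it assumes the enemy keeps its current heading and speed, and the class name and firing power are placeholders):

import robocode.Robot;
import robocode.ScannedRobotEvent;

/** Fires at where the enemy will be, assuming it keeps its heading and speed. */
public class PredictiveSketch extends Robot {
  public void run() {
    while (true) {
      turnGunRight(20); // Keep sweeping for targets.
    }
  }

  public void onScannedRobot(ScannedRobotEvent e) {
    double power = 2.0;
    double bulletSpeed = 20 - 3 * power;         // Robocode's bullet speed rule.
    double time = e.getDistance() / bulletSpeed; // Rough time for the bullet to arrive.

    // The enemy's current position, from our position plus the absolute bearing.
    double absBearing = Math.toRadians(getHeading() + e.getBearing());
    double enemyX = getX() + e.getDistance() * Math.sin(absBearing);
    double enemyY = getY() + e.getDistance() * Math.cos(absBearing);

    // Advance the enemy along its heading for the bullet's travel time.
    double enemyHeading = Math.toRadians(e.getHeading());
    double futureX = enemyX + e.getVelocity() * time * Math.sin(enemyHeading);
    double futureY = enemyY + e.getVelocity() * time * Math.cos(enemyHeading);

    // Turn the gun to the predicted point and fire.
    double gunTurn = Math.toDegrees(Math.atan2(futureX - getX(), futureY - getY())) - getGunHeading();
    turnGunRight(gunTurn);
    fire(power);
  }
}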

(I did post code and I'm done a little early. Please be nice and attribute me if you use it.)

Saturday, August 29, 2009

Redesign Your Home For Free*

* virtually.
  • Overview

    In this blog, I'll be doing a review of Sweet Home 3D to see whether it satisfies the Prime Directives of Open Source Software Engineering. The project's main website describes Sweet Home 3D as "a free interior design application that helps you place your furniture on a house 2D plan with a 3D preview". They also want users to be able to design their interior quickly, whether it's placing new furniture or rearranging existing furniture. It is available in a myriad of languages, including French, Russian, and Chinese.

  • Prime Directive 1

    Prime directive 1 says that "The system must accomplish a useful task". Remodeling a home requires a lot of planning. There tends to be a lot of money involved, so one wants to be sure to get things right. Also, people will probably have to live with their design decisions for a long time. Because of this, many people use interior design programs to get a better idea of how a room will look before making any lasting decisions. While you could go to a local software store and buy one, having a free and open source option is a great alternative. Thus, I would say that prime directive 1 is satisfied.

  • Prime Directive 2

    Prime directive 2 states that an external user can successfully install and use the system. Installing Sweet Home 3D is remarkably easy. On OSX, the download from Sourceforge is a dmg file that many Mac users are familiar with. Once the file is opened, the user drags SweetHome3D.app into their application folder, just like many other Mac applications. One might not even think that this application is written in Java at all since it integrates so well with OSX.

    The application itself is fairly straightforward as well. When the application is opened, the list of home items and the floor plan grid are visible in the window. To create a room, you can use the room tool at the top to designate the corners of your room. Here's a sample bathroom that I made in 5 minutes.


    If you need more help, there is a great users guide that describes how to use the various tools to redesign your home. These three things (easy installation, an intuitive interface, and a detailed user guide) help this application satisfy prime directive 2.

  • Prime Directive 3

    Prime directive 3 says that an external developer can successfully understand and enhance the system. To do this, you need to find the source code for the application. The source code for Sweet Home 3D is located on their download page near the bottom. Note that they also provide a Javadoc download for developers who want to understand how the application works. In the source code folder, the README file contains instructions on how to add the Sweet Home 3D project to Eclipse and how to build the project using Ant.
    Below is a screenshot of the imported Sweet Home 3D project. I was also able to run their unit tests in Eclipse.

    Sweet Home 3D also supports plugins to extend functionality of the program. I went through their tutorial and created my own plugin for Sweet Home 3D.
    Being able to both build the app from the source code and develop your own plugins means that the developer can easily enhance the system. Thus, prime directive 3 is satisfied.