friday, 22 march 2013

posted at 17:50

Over the last couple of weeks I've been working on integrating remote filestores into FastMail. We've had our own online file storage facility for years (long before it was cool), and you've always been able to attach a file in your store to an email and save an email attachment to your store. We've been extending that to allow you to use a "cloud" file storage service in exactly the same way.

Our file storage facility is fairly simple in concept, and operates around the traditional files-and-folders model that we've used forever. For the first external service to integrate with we chose Dropbox, mostly because it's by far the most widely used, but also because it uses the same files-and-folders model, so it was easy to design an abstraction and slot Dropbox in behind it. If you're a FastMail subscriber, you can try it out right now on beta. Once we've finished polishing and testing it we'll release it to production. It should only be a couple of weeks away, but don't quote me on that!

When I developed our internal remote filestore abstraction I designed it with the idea that it would be fairly simple to integrate other remote filestores as well. Today I spent a good amount of time working on an integration for Google Drive. I'm mostly doing this to satisfy myself that I have a good abstraction in place, but of course Google is no small fish and I think that it would be wonderful if we could make this available to users as well.

This has not been a simple undertaking. To find out why, let's talk about the architecture of our client a bit.

One of the features of our internal API (the AJAX stuff that our client uses) is that it is completely stateless. This has been done deliberately, as it makes it very easy to scale our backend servers. Obviously state can be held (there is a very nice database available all the time), but the API itself has no real concept of state, so it becomes tough to know what data to store and when to expire it. So to build anything we start by assuming it will be stateless.

Our attachment system is very simple. There is a file picker that requests metadata for a given path (a standard /foo/bar/baz construction). It gets back name, full path, timestamp and type for the requested folder and its immediate children. When the user selects a folder, a new metadata request comes in for that folder. The server does not care what's gone on before; it just turns paths into lists of metadata. Later, to actually attach a file, we call a different method with the path of the wanted file, and the file data comes back. Like I said, very simple.
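To make that concrete, the metadata for each entry is roughly the following (a sketch only; these field names are mine, not our actual API's):

struct file_meta {
    char  *name;    /* "baz" */
    char  *path;    /* "/foo/bar/baz" */
    time_t mtime;   /* timestamp */
    char  *type;    /* MIME type, or a folder marker */
};

A metadata request for a path returns one of these for the folder itself plus one for each immediate child; nothing more is needed to drive the picker.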

So back to Drive. The major reason for it being an utter pain in the backside to integrate is that the API itself has no concept of folders or paths. Now anyone using the Drive web interface will know that it has folders. This is actually something of a lie. A Drive is just a giant pool of files with various properties that you can query on. A file can have "parent" and "child" pointers to other files, which allows a loose hierarchy of files to be constructed. A folder is simply a zero-size file with a special type (application/vnd.google-apps.folder) and appropriate parent and child pointers.

Every file has a unique and opaque ID, unrelated to the file's name. These IDs are what's used in the parent and child pointers. There's no way for us to construct the ID of a file from a path. To find the metadata for our file, we have to follow parent and child pointers around.

So let's say we want to get the metadata for the folder /foo/bar/baz and the files inside it. We start off by getting the metadata for the root "folder", helpfully called root (gotta start somewhere). Along with all the info about that root folder we get back its ID. Let's say its ID is 'root123456' (it won't be; real IDs are opaque and apparently random, but this will do for our purposes).

Now we have to find foo. We request the file list, with some search filters (normally all on one line, presented here with newlines for readability):

'root123456' in parents and
title = 'foo' and
mimeType = 'application/vnd.google-apps.folder'

Gotcha 1: deleted ("trashed") and hidden files are returned by default. We don't want those. So actually the filter is:

'root123456' in parents and
title = 'foo' and
mimeType = 'application/vnd.google-apps.folder' and
trashed = false and
hidden = false

Gotcha 2: this query goes into the q= parameter of a GET request, but it needs to be form-encoded rather than using the standard URI 'percent' encoding.

(Neither of these gotchas is documented. Good luck.)
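For the record, form encoding is ordinary percent encoding with spaces turned into '+'. A minimal sketch in C (my own helper, not anything from our code; 'out' must have room for three times the input length, plus one):

#include <ctype.h>
#include <stdio.h>

/* form-encode 'in' into 'out': unreserved characters pass through,
 * spaces become '+', everything else becomes %XX */
static void form_encode(const char *in, char *out) {
    for (; *in; in++) {
        unsigned char c = *in;
        if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~')
            *out++ = c;
        else if (c == ' ')
            *out++ = '+';
        else
            out += sprintf(out, "%%%02X", c);
    }
    *out = '\0';
}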

Assuming it exists, we'll get back a "list" containing one item. I'm not actually sure if two items with the same name and type can exist. Probably, so for now I return "not found" if I don't get exactly one result. That's an implementation detail though, and it might change.

So now we have our foo metadata, we can get its ID and then repeat the process for bar, and so on.
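In rough C, the whole walk looks something like this (drive_find_child() is a hypothetical wrapper around the files.list query above, not a real API call):

#include <stdlib.h>
#include <string.h>

/* hypothetical: issues the files.list query shown above and returns the
 * ID of the matching child folder, or NULL if there's no single match */
extern char *drive_find_child(const char *parent_id, const char *name);

/* walk /foo/bar/baz one component at a time, one HTTP round-trip each */
char *resolve_path(const char *path) {
    char *id = strdup("root");    /* Drive's well-known alias for the root */
    char *copy = strdup(path);
    for (char *name = strtok(copy, "/"); name && id; name = strtok(NULL, "/")) {
        char *child = drive_find_child(id, name);
        free(id);
        id = child;               /* NULL here means "not found" */
    }
    free(copy);
    return id;                    /* the folder's ID, or NULL; caller frees */
}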

Each of these requests is a separate HTTP call. They're stateless, so various performance tricks can be utilised (keepalives, etc). My servers are on good networks so it's not that slow, but it's still a lot of round-trips.

Once we've drilled down that far, we do a final request for a file list with the same filter, this time leaving off the title and mimeType terms (we want everything):

'baz123456' in parents and
trashed = false and
hidden = false

Gotcha 3: this will return Google documents, spreadsheets, presentations and the like. These are identifiable by MIME type, and are not downloadable (because they're internal application magic). Their metadata has various URLs for converting to eg Word documents, but these aren't really appropriate for our use. We'd like to filter them out. Unfortunately that means excluding a specific set of MIME types in the filter:

'baz123456' in parents and
mimeType != 'application/vnd.google-apps.document' and
mimeType != 'application/vnd.google-apps.spreadsheet' and
mimeType != '...' and
trashed = false and
hidden = false

That sucks because you have to encode the full list of exclusions right there in the query, which you'd then have to update whenever Google adds a new type of document. Instead I've opted to drop anything with a zero size, but there's no size search term available, so I've got to pull the lot and filter it myself.
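The client-side filter is nothing special. A sketch, assuming the response has already been parsed into an array (the struct is mine, not the API's):

#include <stddef.h>

struct drive_file {
    const char *id;
    const char *title;
    long        size;    /* zero for Google-native documents */
};

/* compact 'files' in place, keeping only entries with a real size;
 * returns the new count */
size_t drop_zero_size(struct drive_file *files, size_t n) {
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (files[i].size > 0)
            files[kept++] = files[i];
    return kept;
}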

Anyway, we now have the metadata for the requested path and all its children, so we can return it to the caller. It takes N+2 HTTP requests to get all the data we need, where N is the number of "inner" path components: one for the root, one per component, and one for the final listing. This is hardly ideal, but it works well enough, is probably fast enough for most cases (ie there aren't likely to be very very deep folder hierarchies) and isn't even a lot of code.

So next up is search. Our file picker has a "search within folder" option, which looks for a given file name (or name fragment) within a folder and its subfolders. The subfolders bit is actually a significant problem for us here. Finding matching files within a single folder is pretty easy - it's just a repeat of the above, but the last query gets an additional 'title contains' filter.

Deep search is far more difficult. The obvious approach (and the one I started implementing) is to drill down to the given path, then do a search in that folder for files with a matching title and for subfolders. And then loop through the subfolders, repeating as we go. The slightly more refined version of that is to drill through the folders, collecting their IDs, then constructing a single filter for the files of the form:

title = 'bananas' and (
    'root123456' in parents or
    'foo123456' in parents or
    '...' in parents
)

Gotcha 4: You can group terms with parentheses. This is not documented.

The trouble here is that this is potentially unbounded. We don't know how deep the hierarchy goes, or how many branches it has. It wouldn't be a problem if each request was negligible (as it often is with a local filesystem with metadata hot in a memory cache), but here it's hugely expensive in a deep hierarchy. As noted above, the regular metadata lookup suffers this too, but to a lesser degree, as it only ever goes down one branch of the tree.

This is where I got to when I left the office today. The approach I'm likely to take is to request a list of all folders (only), assemble an in-memory hierarchy, drill down to the folder I want, collect all the IDs and then perform the query. So it actually only becomes two requests, though potentially with a lot of metadata returned on the full folder list.
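Sketched in C, assuming the folder list has been parsed into a flat id/parent array (again, my own names, not the real response format):

#include <stddef.h>
#include <string.h>

struct folder { const char *id, *parent_id; };

/* collect the ID of 'start' and of every folder beneath it, breadth-first.
 * 'out' must have room for n+1 entries; returns how many were collected */
size_t collect_subtree(const struct folder *all, size_t n,
                       const char *start, const char **out) {
    size_t count = 0;
    out[count++] = start;
    for (size_t i = 0; i < count; i++)    /* for each collected ID... */
        for (size_t j = 0; j < n; j++)    /* ...pull in its children */
            if (all[j].parent_id && strcmp(all[j].parent_id, out[i]) == 0)
                out[count++] = all[j].id;
    return count;
}

The collected IDs then drop straight into the 'in parents' clauses of the final query.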

And from there I guess the metadata lookup becomes the same thing really.

And I suppose if I was in the mood to cache things I could cache the folder list by its etag, and do a freshness test instead of the full lookup.

But mostly I'm at the point of "why?". I recognise that at Google search is king, and explicit collections like folders are always implemented in terms of search (see also tags in Gmail). But folder/path-based filesystems are the way most things work. We've been doing it that way forever. Not that we shouldn't be willing to change and try new things, but surely it's not hard to see that an application might want to take a traditional path-based approach to accessing its files?

I'm doubly annoyed because Google is supposed to be far ahead of the pack in anything to do with search, yet I cannot construct a query that will do the equivalent of a subfolder search. Why can the API not know anything about paths, even in a light way? It's clearly not verboten, because parent and child pointers exist, which means a hierarchy is a valid thing. Why is there no method or even a pseudo-search term that does things with paths? Wouldn't it be lovely to use a query like:

path = '/foo/bar/baz'

to get a file explicitly? Or even cooler, to do a subfolder search:

path startswith '/foo/bar/baz' and title = 'bananas'

Instead though I'm left to get a list of all the files and do all the work myself. And that's dumb.

I'll finish this off, and I'll do whatever I have to do to make a great experience for our users, because that's who I'm serving here. It would just be nice to not have to jump through hoops to do it.

saturday, 2 march 2013

posted at 07:52

Everyone in the world has awesome ideas, me included. If I have time, I'm going to work on mine, not yours. I already don't have time to do all the things I want to do. Why would I drop everything and work on your thing instead?

Maybe you could convince me that your idea is better than any of mine and I should work on it. It's not unheard of. But you've got to get past the "I don't care" bit first, otherwise I'm just not listening.

There's only one way to do that. Instead of trying to tell me your great idea, show me your great idea. If you're a programmer, implement a prototype I can play with. If you're an artist, draw a series of pictures that can accurately show me how your thing works.

Around Pioneer I often say "patches welcome" or "working code wins". It's not because I hate you, or because I think your idea sucks. It's just that I have my own things that I want to do.

See also: Why Your Game Idea Sucks

thursday, 6 december 2012

posted at 22:42

First of a series of ramblings about a few things that have been on my mind lately. Applies only to me, not judging anyone else.

There seems to be an increasing tendency in the world, particularly in the scientific and engineering communities in which I move, to hold up logic over anything else. I think that's wrong.

I call myself an "intuitive" programmer. I'm not a deep analytical thinker. I'm bad at math, and I have a terrible memory. Most of my programming and sysadmin work is a kind of gut-feeling, follow-my-nose approach. I can often see a problem coming, or correctly diagnose the cause of a fault, without being able to tell you why, at least initially. I'm happy with this - I'm good at what I do, even if my methods are a little unorthodox at times. It is something that my peers have difficulty with at times: I can't always provide data to support something. Fortunately it usually doesn't take long working with people for them to trust me enough to give me the chance.

The world as a whole seems to have things wrong. We hold logic, reason and intellect above emotion and intuition. People are admonished for bringing emotion into an argument, when it is this emotion that allows us to consider how our actions affect other people and come at problems from a different angle. Emotion is important.

It used to get me down that I couldn't keep up with the thinkers, those theorists that could see everything in terms of numbers and processes and systems. As I've gained experience, I've come to understand that so much of what makes me good at what I do (programmer, sysadmin, husband, father) is that I'm able to use both my mind and my heart to approach a problem. I'm not saying I'm good at either, but I think I've found a reasonable balance that can produce useful results.

Incidentally, that's part of why believing in God and following Christ can work for me. So many people get stuck on how many aspects of God they can't reason around or make sense of. I can construct a logical argument for many facets of God that I understand, but I'm also happy to say that there are some things about God I don't understand in my head even though I know them to be true in my heart. That's not to say I don't want to understand them, but I'm fine with it if I never do.

tuesday, 6 november 2012

posted at 23:21

So today Frontier Developments started a Kickstarter campaign for the next chapter in the Elite game series: Elite: Dangerous. I heard this from half a dozen people within the first hour of it appearing. The most common question is "what does this mean for Pioneer?". The quick answer is "nothing", but I have some thoughts about the whole thing, so I thought I'd write something.

So what's going on?

Let's have a short history lesson.

Long ago, David Braben (along with Ian Bell) made Elite. Some years later its sequel, Frontier, appeared, and shortly afterwards it gained a sequel of its own, Frontier: First Encounters. Reaction to FFE was mixed, but everyone agreed Frontier was pretty fantastic. Life was good.

Braben, via his company Frontier Developments, announced that there would be an Elite 4, but nothing happened for years. Frontier got old, and the community wanted something to fill the void. Tom Morton stepped in with Pioneer, an open-source clone of Frontier. I turned up a few years later and nowadays we're trying to put legs on a game that we now like to say is "inspired by" Frontier.

A couple of people refused to give up, but most never really thought that Elite 4 would happen. Then today we saw the announcement of Elite: Dangerous, the next Elite game.

Is Pioneer now obsolete?

Nah. For a few reasons.

  • Elite: Dangerous looks more like a modern remake of Elite than of Frontier. One particularly interesting tidbit came from David Walsh over at the Frontier forums: Elite: Dangerous will not have time acceleration. Now we've thought long and hard about what the key features of Frontier are that we must maintain for Pioneer, and one of them is the ability to speed up the clock. It's also the feature that makes multiplayer infeasible. So whatever they're making, it's not Frontier, and because of that, we still have a niche.
  • We have some ideas that we haven't seen done in any game yet. We can still bring something to the genre.
  • We're having too much fun :)

Does this take attention away from what you're doing?

Doubtful. Indie space games are kind of in vogue at the moment - it seems like we see a new announcement every week. Every time it happens, lots of people pop up pining for Elite or its successors, and usually there's at least one person pointing folks in our direction. We've had a steady trickle of new players and contributors for many months, and I don't see it going away. If anything, we'll probably see more people arrive as interest in the genre grows and folks are reminded of their youth. Or not, and we'll continue as we always have.

Is it going to work?

Honestly, I have no idea. I have my reservations about Kickstarter for software projects - it seems like everyone treats it like magic fairy dust, but we still haven't actually seen any output from these multi-million-dollar projects.

On the other hand, someone in #pioneer noted that Braben promised a $35 general-purpose computer and was then heavily involved in the delivery of the Raspberry Pi. He's got the track record.

If it doesn't work, then it's going to be a very public embarrassment for Frontier Developments among the Elite player community, one they may not come back from. I will watch with interest.

Need any help with Pioneer?

Only semi-related, but advertising never hurt anyone :)

We always need help. There's lots that needs doing. There's a few resources on the wiki to help you get started, or come and say hi in #pioneer on freenode.

tuesday, 24 july 2012

posted at 03:21

First day in the office today, and all is going well. I'll write more about that and post some pics later, but for now let's talk about that most mundane of tasks: grocery shopping.

[photo: the least-weird products available]

For the non-Norwegian in Norway, at least two problems present themselves (I say "at least" because I created more problems of my own):

  1. Pretty much every single word on every single packet is in a language I don't understand.
  2. The prices are all in a different currency on a different scale, making comparisons quite difficult.

I went in thinking I needed to at least grab sugar (for my coffee), shampoo, and something for breakfast and dinner for the next couple of nights. I made a quick spaghetti bolognese last night and saved the leftovers, but I undercooked the spaghetti so I'm not keen on reheating it. Besides, I probably can't live off that for four weeks.

Most of this is what you'd expect, though I hated having to buy 1kg packets of salt and sugar and 1.5L of oil. Maybe I should bake a cake?

The most difficult/controversial item here is probably the butter ("Meierismør"). At home I eat spreadable butter, which is butter with a little oil mixed in to make it softer at room temperature. I don't like the taste of margarine. Most of the products on offer looked to be margarine, which I managed to infer from either the packet having "marg" on it somewhere or the ingredients list having more than a couple of things (ie butter, salt, preservative, that sort of thing). Anyway it looks like I lucked out, but man, it took a lot of study to finally select this one. There were a few other things like that, but none quite this tough.

The other crazy thing is the prices. 1AUD is worth about 5-6NOK, depending who you ask. What you see here cost 373,10Kr. According to my bank I paid $59.06. That doesn't seem too outrageous considering there's meat and fish in there. So that's ok. When considering the difference between two products though, it's really hard to suppress my normal instinct about what's cheap, what's expensive and how two prices compare. Consider the shampoo. I paid 21,90Kr, which works out to about $3.50. Prices for shampoo ranged from 15-40Kr. So my warning bells go off with "holy crap, $22 for shampoo and a range of $25? wtf?" when in practice the range is around $2.50-6.50, which is pretty much spot on.

Other things you can do to make your shopping trip harder than it needs to be:

  1. Go in having no particular idea of what it is you need (this applies in Melbourne too).
  2. Go to the supermarket at the work end of the train ride, so you have to public transport your loot all the way home.
  3. Forget that this is Europe where they make you pack your own stuff and don't just give you a bag for every two items.

I was also lucky enough to find a cashier that couldn't (or wouldn't) speak English. Fortunately buying groceries is pretty much the same wherever you go.

Now, dinner!

sunday, 22 july 2012

posted at 01:46

So here I am at Brussels Airport. It's growing on me, but only because I'm sitting up in the departure lounge. The arrivals area is a complete hole. I can't decide if that's the wrong way around - on the one hand, you want to make a good first impression (failed), but on the other you want people to leave with a fond memory (succeeded). It might be a combination of being awake for the best part of the last 24 hours and the spaghetti, chips and beer talking. It's probably not important.

Anyway, the backstory for the uninitiated is that I now work for Opera Software, working from the Melbourne office. I meant to write something about leaving Monash and starting at Opera but hadn't got around to it yet. Maybe next time. In any case, I've been there a couple of months, love it, and now it's time to make the pilgrimage to the head office in Oslo, Norway. They seem to like everyone to visit overseas offices semi-frequently, and I have double reason to go in that the fine fellow I was hired to partially replace lives and works there. It will be extremely useful to be able to get an answer in 30 seconds instead of waiting a whole day and night for an email round trip. It's a good thing!

Of course, I have to leave my dear family, a fact that none of us are particularly happy about. The timing isn't great either, because everyone got sick. Francesca has some infection messing her up, so much so that she cried all the way to Coburg, where we dumped her at her grandma's house and continued on to the airport. Wife tells me she's getting sick too, which is only going to make things harder for her - my children are delightful but are also about eight full-time jobs to take care of. My wife is an amazing woman. I will likely write her an email telling her so very soon, and I will likely have to study ancient languages to find words that go even some small way to communicating just how magnificent she is.

Beth cried at the airport and wouldn't let me go. I'm told that Penny looked after her as well as a three-year-old can - touching her a lot and telling her "don't be sad, it's ok". I love my family so much! I'm fairly fried right now, but I think once I've slept in a bed and had a moment to think I'll really start to miss them.

So, the flights. I left Melbourne at around 10pm for a 14-hour flight to Abu Dhabi. Etihad is a fantastic airline - the seats were comfortable, the food was very good and they couldn't do enough to look after you. I got myself signed up for the "Etihad Guest" program, which among other things gets me a 5kg increase in my baggage allowance. So between that and freeing up another 5kg by not having to carry engine parts with me on the return flight (help a friend out: buy an oil cooler in Australia, chuck it in your bag and take it to Norway for him), I should have no excuse not to bring heavy presents back with me.

I slept fitfully for a few hours which appears to have been enough to keep me alive. Either that or I'm just used to functioning on very little sleep. Most of the rest of the time was spent reading and hacking. Same old story with me.

Abu Dhabi is bizarre. The sun was up at perhaps 5am. We landed at about 6.30am and already it was over 30°C and humid. The airport is fully air-conditioned of course, but it never quite felt comfortable. You could tell that it was overworked. And the land is brown, so brown. It really is the middle of a desert. Seems a very strange place to attempt to eke out an existence, let alone build a throbbing metropolis, but I guess if it's your home then you know what you're doing.

The terminal was large and modern, with free internet that everyone used and as such I couldn't get near. And people everywhere, so many people. And surprisingly good coffee! And everyone spoke excellent English, so really it wasn't so hard to tell that I wasn't in Melbourne. Only a very short stop though, so I didn't really have much time to see anything.

The flight to Brussels was more of the same; I'd gotten used to it by now. More book (finished it), more hacking (taking apart Pioneer's starsystem generator, making great progress there). Not really a whole lot to add, except to say that I glanced up every now and again at the movie "Deep Impact", and it looks to be equal parts terrible and awesome. I really am tired.

So now I'm in Brussels. As noted, it's pretty crap. I'm reliably informed that this extends to the city and probably the country as a whole, so I guess I should give them points for consistency. Got myself a good grilling by the immigration official (because the airport is so badly laid out that you have to go through customs twice even if you're just transferring), then had to talk to a bored Brussels Airlines representative to a) figure out how to get a boarding pass and b) figure out where my luggage went. After twenty minutes and a good amount of "merci", "pardon" and "je ne comprends pas" (they didn't seem keen on speaking English, and that's about the extent of my French), I managed to get it all sorted out.

Anyway, I've managed to find my way up to the departure lounge, which is actually quite nice. Quiet, stolen power available, internet paid for (when you haven't had contact with the outside world for a while, 20EUR does not seem a lot of money for four hours). Another ~20EUR got me a reasonable spaghetti bolognese, a beer (Leffe Blond, passable) and a little can of Pringles (ahh globalisation, good wherever you go). It's not terrible, but even if it was I think my standards at this point are too low to notice.

Just had half a chat with my lady via Skype. I say half because we had no sound in either direction and only my video seemed to work, so the actual conversation was conducted via Google Talk. But it was nice just to check in. We'll have to get the technology fixed properly before we try it again.

So I'm a couple of hours away from my flight, which should get in to Oslo just after 10pm local. From there I collect my things and catch the train to Oslo S. I'm hoping to find an English-speaking human there who can a) sell me a map and b) direct me to my apartment. The office there is closed, but I'm told the nearby 7-11 has been informed of my arrival and is holding the key for me. And after that it should be as simple as shower and bed. And then I get to spend the remainder of Sunday (after I wake) figuring out where the hell I am and how to get to work, so that I can be there sprightly and on time on Monday morning.

While I'm sure visiting another part of the world is great, getting there is a complete pain in the arse. Someone hurry up and invent a transporter. Please.

monday, 4 june 2012

posted at 11:22

I use IRC a lot, both for work and personal stuff. I use bip, an IRC proxy, to keep me in my channels all the time and log stuff so that I never miss a thing. I run it on my home server and connect to it from XChat from work, home or wherever else I happen to be. It works well.

I also use IRC from my phone, using AndChat. With that I connect directly to the networks and channels I'm interested in. It works very nicely and lets me keep track of things as I move around, which happens a lot. Unfortunately it's at the mercy of the madness that is mobile connectivity, but that's hardly its fault.

Lately though, I've had a problem. AndChat has been unable to hold a connection to Freenode. It will connect fine, but then after a little while, if I go to send something, I find the connection has actually dropped in the background. AndChat dutifully reconnects, but by that time I've lost any conversation that was happening. It also meant that the other people in the channel were seeing lots of connects and disconnects from me. It's fairly normal for IRC, but it looks messy and I'm not keen on that.

The thing I found curious through all of this was that my connection to work's IRC server never dropped. So it's likely not AndChat at fault, but something lower down. I have been upgrading the Android version on my phone quite a bit, trying to find the "best" community version of ICS for it. It's likely something changed there.

After a lot of searching and piecing things together, the conclusion I've come to is that this particular build of Android (or maybe all 4.0.4 builds) doesn't send TCP keepalives as often as previous versions did. Whatever interval is set is longer than the connection idle timeout set by my service provider. That's not a problem for work's IRC server, as it sends keepalives far more regularly. Freenode however does not seem to send any at all.

I pointed the phone at the bip proxy for both services, which sees them both losing the connection. This appears to confirm my suspicions, and unfortunately also shows that bip doesn't send keepalives. Happily it's open-source, so I can fix it. Into the code we go!

The way to enable keepalives on a socket is quite simple:

int on = 1;
setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

Keepalives have three parameters: the time the connection has to be idle before keepalives start, the interval between keepalives, and the number of keepalives sent without response before the connection is declared invalid. These parameters are defined at the OS level for all sockets, and on Linux default to 7200 seconds, 75 seconds and 9 probes. That's right, two hours idle before starting keepalives. Not at all suitable for what we need.
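For the curious, the system-wide defaults live under /proc on Linux; a throwaway program to dump them:

#include <stdio.h>

/* print the system-wide keepalive defaults. Linux-specific paths;
 * values are idle seconds, interval seconds, probe count */
int main(void) {
    const char *files[] = {
        "/proc/sys/net/ipv4/tcp_keepalive_time",
        "/proc/sys/net/ipv4/tcp_keepalive_intvl",
        "/proc/sys/net/ipv4/tcp_keepalive_probes",
    };
    for (int i = 0; i < 3; i++) {
        FILE *f = fopen(files[i], "r");
        int v;
        if (f && fscanf(f, "%d", &v) == 1)
            printf("%s: %d\n", files[i], v);
        if (f) fclose(f);
    }
    return 0;
}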

There's no standard interface for changing these parameters on a per-socket basis, but Linux helpfully provides its own socket options (in <netinet/tcp.h>) to let applications do exactly that. I'm hardly concerned with portability for this hack, so these are exactly what we need:

int idle = 60;
int interval = 60;
int count = 5;

setsockopt(s, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
setsockopt(s, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
setsockopt(s, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));

That is, start sending keepalives after one minute idle, send every minute after that, and five missed responses mean the connection is dead. These seemed like reasonable numbers. I don't want to ping too often as each ping makes the phone do work and thus use a little bit of battery. This seems to be working very well though!

monday, 6 february 2012

posted at 21:56

Nine months isn't too long between posts, right?

This is the contents of a brainstorming post on G+. It's some musings about the Pioneer GUI and where to take it. If you've only been reading this blog then you've missed all the work I've done on this in the last few months. Ask if you're interested and I'll throw some pointers your way. Also there are updates on the G+ post, so keep reading that. The modern world of blogging at its finest.

Anyway, I know visually what I want, and it's basically what jQuery gives you out of the box - nice swishy transitions, zooms, etc. I think the model is appropriate since we want to expose some parts of this to scripts, but even in core it'd be wonderful to just say "slide this widget onto the screen and once it's there, call this method".

In the long term I'm intending to move Pioneer away from its 800x600 virtual screen, mostly because it looks like shit on a really high-res widescreen monitor. So my intention is that positioning will be done as 0.0-1.0 in x, 0.0-1.0 in y, measured against a chosen container edge. Scaling by default will be aspect-aware, so I can say that a particular image (quad) should use 10% of the horizontal width and its height will do the right thing.
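As a sketch of what "do the right thing" means (names are mine, nothing from the actual codebase): the widget asks for a fraction of the screen's width, and its normalized height falls out of the screen and image aspect ratios:

/* given a widget that wants frac_w (0.0-1.0) of the screen's width,
 * compute the 0.0-1.0 height that keeps an image with the given
 * aspect ratio (width/height) undistorted */
float aspect_height(float frac_w, float image_aspect,
                    float screen_w, float screen_h) {
    float px_w = frac_w * screen_w;      /* width in pixels */
    float px_h = px_w / image_aspect;    /* height preserving aspect */
    return px_h / screen_h;              /* back to normalized units */
}

So aspect_height(0.10f, ...) is the "10% of the horizontal width" case above.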

I'm pretty sure I want a fluid layout, where widgets ask for a minimum/ideal/maximum amount of space and are then handed out an amount depending on the layout strategy of the container they're in. This is not so different to what we have now, but we don't use it in a lot of places. I want fixed position to be used very very sparingly, mostly where precise positioning is required, like control panel buttons.

Scrollbars should be automatic at the container level. If scrolling is enabled for a container then it will give out the maximum possible width/height to each component (based on its requested dimensions).

I like the model Unity uses for its UI, where each widget has a corresponding style component. This also gives a convenient place for transitions to hook in - they simply modify styles. To do positioning or zooming-type animations I think I don't want to try to modify the widget's metrics, but rather apply a transform at draw time.

I'm still not sure how font metrics feed into widget metrics. I've been assuming for a while that I need a way for a widget to say it wants a height of N lines of text, but whenever I try to think of a clear example I can't. Perhaps I don't care - perhaps it just needs to say "I'll take all of the vertical space you can give me, and please scroll me if I blow it". Though I can see that text layout needs to know whether it should prioritise width over height or vice-versa when doing wrapping.

I've done a good amount of research and experimentation with various GUI systems, both for Pioneer itself (Rocket, GWEN, Guichan, etc) and elsewhere (GTK+, jQuery, Zune, etc). They all have things I like and don't like, which is why I think this has gone on so long.

So basically I think I want to do the lightest and most obvious design possible and see what falls out. That's proving a little more painful than I'd like because Pioneer's GUI has way too much global state which has to be fixed first. It will happen though - I'm quite determined :)

wednesday, 18 may 2011

posted at 11:47

I've been learning a lot about fonts in the last week as I work on the foundations needed to turn this:

[screenshot: before]

into this:

[screenshot: after]

I'm not going into detail right now, because I could write reams and still not end up saying much - font rendering is a real dark art. I mostly just wanted to share a little hack that I put together that might be useful to anyone experimenting in the same space.

The short of it is that Pioneer uses FreeType. Since I was getting into working on the font code I took the opportunity to see if there are any lighter alternatives, so that we could remove a dependency. The answer is that there really isn't one for the kind of things we need, but there is one worthy contender: stb_truetype.h

It's a very simple TrueType renderer in ~1800 lines of C (~500 of which are comments and documentation). It's missing support for a lot of things, but it's a single file to include in your project and does a good job of the common fonts that most people have.

The only trouble I had with it is that the very few examples I could find assume you're using it directly with OpenGL. It's probably not an unreasonable assumption, but it made things a little difficult for me because I'm still not great with GL, but much better with SDL. What I really wanted was a simple example for SDL, so I could get a feel for the API and check its output without having to wrestle with GL and then wonder if I was getting odd results because I'd done something wrong.

Alas, no such example existed, so I wrote one. Here it is, for internet's sake:

/*
 * sdl_stbtt - stb_truetype demo using SDL
 * Robert Norris, May 2011
 * Public Domain
 *
 * Compile:
 *   gcc --std=c99 -o sdl_stbtt sdl_stbtt.c `sdl-config --cflags --libs` -lm
 *
 * Run:
 *   ./sdl_stbtt <path-to-ttf-file> <text-to-render>
 */

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <SDL.h>

#define FONT_HEIGHT 32

#define STB_TRUETYPE_IMPLEMENTATION
#include "stb_truetype.h"

int main(int argc, char **argv) {
    if (argc != 3) {
        printf("usage: sdl_stbtt <path-to-ttf-file> <text-to-render>\n");
        exit(-1);
    }

    /* getting the font into memory. this uses mmap, but a fread variant would
     * work fine */
    int fontfd = open(argv[1], O_RDONLY);
    if (fontfd < 0) {
        perror("couldn't open font file");
        exit(1);
    }

    struct stat st;
    if (fstat(fontfd, &st) < 0) {
        perror("couldn't stat font file");
        close(fontfd);
        exit(1);
    }

    void *fontdata = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fontfd, 0);
    if (fontdata == MAP_FAILED) {   /* mmap signals failure with MAP_FAILED, not NULL */
        perror("couldn't map font file");
        close(fontfd);
        exit(1);
    }

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "sdl init failed: %s\n", SDL_GetError());
        munmap(fontdata, st.st_size);
        close(fontfd);
        exit(1);
    }

    /* creating an off-screen surface to render the glyphs into. stbtt outputs
     * the glyphs in 8-bit greyscale, so we want an 8-bit surface to match */
    SDL_Surface *glyphdata = SDL_CreateRGBSurface(SDL_SWSURFACE, 512, 512, 8, 0, 0, 0, 0);
    if (!glyphdata) {
        fprintf(stderr, "couldn't create sdl buffer: %s\n", SDL_GetError());
        munmap(fontdata, st.st_size);
        close(fontfd);
        SDL_Quit();
        exit(1);
    }

    /* 8-bit sdl surfaces are indexed (palettised), so set up a palette with
     * 256 shades of grey. this is needed so the sdl blitter has something to
     * convert from when blitting to a direct colour surface */
    SDL_Color colors[256];
    for(int i = 0; i < 256; i++){
        colors[i].r = i;
        colors[i].g = i;
        colors[i].b = i;
    }
    SDL_SetPalette(glyphdata, SDL_LOGPAL|SDL_PHYSPAL, colors, 0, 256);

    /* "bake" (render) lots of interesting glyphs into the bitmap. the cdata
     * array ends up with metrics for each glyph */
    stbtt_bakedchar cdata[96];
    stbtt_BakeFontBitmap(fontdata, stbtt_GetFontOffsetForIndex(fontdata, 0), FONT_HEIGHT, glyphdata->pixels, 512, 512, 32, 96, cdata);

    /* done with the raw font data now */
    munmap(fontdata, st.st_size);
    close(fontfd);

    /* create a direct colour on-screen surface */
    SDL_Surface *s = SDL_SetVideoMode(640, 480, 32, 0);
    if (!s) {
        fprintf(stderr, "sdl video mode init failed: %s\n", SDL_GetError());
        SDL_FreeSurface(glyphdata);
        SDL_Quit();
        exit(1);
    }

    /* the actual text draw. we loop over the characters, find the
     * corresponding glyph and blit it to the correct place in the on-screen
     * surface */

    /* x and y are the position in the dest surface to blit the next glyph to */
    float x = 0, y = 0;
    for (char *c = argv[2]; *c; c++) {
        /* stbtt_aligned_quad effectively holds a source and destination
         * rectangle for the glyph. we get one for the current char */
        stbtt_aligned_quad q;
        stbtt_GetBakedQuad(cdata, 512, 512, *c-32, &x, &y, &q, 1);

        /* now convert from stbtt_aligned_quad to source/dest SDL_Rects */

        /* width and height are simple */
        int w = q.x1-q.x0;
        int h = q.y1-q.y0;

        /* s0,t0 and s1,t1 are texture-space coordinates, that is, floats from
         * 0.0-1.0. we have to scale them back to the pixel space used in the
         * glyph data bitmap; it's as simple as multiplying by the glyph bitmap
         * dimensions */
        SDL_Rect src  = { .x = q.s0*512, .y = q.t0*512, .w = w, .h = h };

        /* in gl/d3d the y value is inverted compared to what sdl expects. y0
         * is negative here. we add (subtract) it to the baseline to get the
         * correct "top" position to blit to */
        SDL_Rect dest = { .x = q.x0, .y = FONT_HEIGHT+q.y0, .w = w, .h = h };

        /* draw it */
        SDL_BlitSurface(glyphdata, &src, s, &dest);
    }

    /* done with the glyphdata now */
    SDL_FreeSurface(glyphdata);

    /* wait for escape (ignore everything else) */
    SDL_Event e;
    while (SDL_WaitEvent(&e) &&
           !(e.type == SDL_KEYDOWN && e.key.keysym.sym == SDLK_ESCAPE));

    SDL_FreeSurface(s);
    SDL_Quit();

    exit(0);
}

This only uses the "simple" API, so its results aren't as good as stb_truetype is capable of, but it was enough to play with and see what the output is like (very good).
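For reference, the lower-level API gives proper per-glyph control. It looks roughly like this - a sketch from memory, so check the documentation at the top of stb_truetype.h for the real story:

/* render a single codepoint with the lower-level API */
stbtt_fontinfo font;
stbtt_InitFont(&font, fontdata, stbtt_GetFontOffsetForIndex(fontdata, 0));

float scale = stbtt_ScaleForPixelHeight(&font, FONT_HEIGHT);

int w, h, xoff, yoff;
unsigned char *bitmap =
    stbtt_GetCodepointBitmap(&font, 0, scale, 'A', &w, &h, &xoff, &yoff);
/* ... blit the w*h 8-bit greyscale bitmap somewhere, then ... */
stbtt_FreeBitmap(bitmap, NULL);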

As it is, I've settled on sticking with FreeType for a number of reasons, but that doesn't take anything away from stb_truetype. If you're looking for basic font rendering without much overhead, do give it a try!

sunday, 20 february 2011

posted at 21:58

Like many games, Pioneer uses Lua for its content generation and dynamic world stuff. It doesn't expose enough of its lucky charms to the world this way though (in my opinion), which is why I'm currently doing a major overhaul of everything Lua-related.

From looking at the code there's a couple of ways that Lua has been integrated over time. All the models in the game (ships, space stations, buildings) consist of static data (textures, object definitions) and a Lua script to hook it all together. The script can define a function that gets called as the model is rendered that can actually modify the model. This is how flashing running lights, rotating radar dishes and other things are done. Its quite a clever system. The Lua style used is pretty much plain calls to the Lua C API, with some hand-coded Lua classes for matrices, vectors, and so on.

The other place Lua is used is in the modules. These are essentially plugins that add stuff to the game based on triggers. It's how missions and smugglers appear on the bulletin board and pirates appear in the system when you enter it, to name but two functions. This interface uses a combination of normal Lua and OOLua.

OOLua is a C++ binding for Lua, which provides lots of macro and template magic to make it easy to expose your C++ classes as Lua classes. It automatically handles unpacking the Lua stack, argument typing and whatnot, so that when in Lua you do something like o = SomeClass:new("abc", 1) and o:thing(23), it'll arrange calls to SomeClass::SomeClass(const char* a, int b) followed by void SomeClass::thing(int a). I'll leave it to you to go and read the theory and code linked from the OOLua homepage. It's quite interesting, though it took me quite a bit of use and misuse before I really started to get my head around it.

My plan, which of course you read because I linked it above, is to expose pretty much everything that might be useful to people wanting to add to the Pioneer universe via Lua, since that's theoretically easier for non-programmers to get to grips with. A good start has already been made on getting OOLua hooked up, so I decided after a few experiments that my first step should be to convert all the remaining non-OOLua stuff to use it. The big one here is all the code in LmrModel.cpp, which is where all the model magic mentioned above happens (indeed, LMR stands for "Lua Model Renderer", which should give some idea of just how central Lua is to all of this).

The way a Lua model definition works is pretty straightforward. At boot Pioneer loads all the model Lua files, which typically contain calls to define_model(). The arguments to define_model() contain lots of calls to functions that define the model - examples are cylinder(), texture() and so on. As you'd expect, Lua calls all of these functions to assemble the arguments before calling define_model() with the whole lot. LmrModel turns this inside out. When a call is made to eg cylinder(), it actually pushes a bunch of commands like "draw triangle", "draw billboard", "use light" and so on onto a stack. When the final call to define_model() is made, LmrModel attaches that stack to a global model definition. It's a bit unusual and can be prone to errors (eg currently if you call one of the element functions outside of define_model(), you'll usually get a segfault), but it also simplifies the code a great deal, because it greatly reduces the amount of data that needs to be passed back and forth between C++ and Lua.

The difficult thing about converting all of this to OOLua is that all the element functions are static functions, not object methods. OOLua is really only built for proper classes, and has only the most minimal support for calling an unadorned function from Lua. That minimal support does do the stack and type signature handling I described above, so I've built an entire layer on top of it to sanely handle calling static functions from Lua. It's still under heavy development (you can follow my lua-overhaul branch if you're interested) but it's already very functional. Here I want to describe a bit about how it works, because I'm quite proud of what I've been able to do. The details are at the bottom of LmrModel.cpp and in OOLuaStatic.h if you want to follow along.

OOLua requires that all static functions be registered against a Lua class (causing them to appear under its namespace). What that means is that we have to define a class, even if it's empty. Sucks, but let's do it.

Once there's a class in place, it's trivial to register functions against it. A typical call is:

OOLUA::register_class_static<pi_model>(l, "call_model", &static_model::call_model);

pi_model is the already-registered class we're hooking the function to. call_model is the name of the function that appears on the Lua side (so in this case we've just registered pi_model.call_model). The final arg is a pointer to the C function that will be called when the function is invoked.

The function is a standard Lua callback with the signature int func(lua_State*). If you like you can use this as-is, but OOLua provides some extra magic here to use this function as a thunk that does stack unpacking and type checking before passing the call on to a real handling function. A typical callback function for OOLua looks like:

int static_model::call_model(lua_State *l) {
    OOLUA_C_FUNCTION_5(void, call_model, const std::string&, const pi_vector&, const pi_vector&, const pi_vector&, float)
}

The arguments are straightforward - the return type, the name of the function to call, and the types of its arguments. This will result in a call to:

static void call_model(const std::string& obj_name, const pi_vector& pos, const pi_vector& _xaxis, const pi_vector& _yaxis, float scale)

If the types or number of arguments are wrong, then instead a Lua error will be generated.

So this is all very nice, but it has some shortcomings. The simplest is the amount of boilerplate that needs to be written to set up a function. Some simple start/end macros to define the thunk function are all that's necessary there.

The next thing I stumbled on is the need for a form of multiple dispatch. OOLua already does this for constructors, but not for method calls, which I find a little odd. What it meant is that I had to implement it myself. Since Lua is typeless there's really no way, short of some educated guessing, to make a choice based on types, but choosing the function is possible based on the number of arguments. This is expected in a few places in the existing model interface. For example, texture() in its simplest form requires the name of a texture only, but it's also possible to call it with extra args specifying position and transformation of the texture. So we now have two possible functions that we could call. OOLua can't support this directly, so I wrote some macros that, when used, expand to (slightly simplified):

int static_model::texture(lua_State *l) {
    const int n = lua_gettop(l)-1;
    if (n == 1) {
        OOLUA_C_FUNC_1(void, texture, const std::string&)
    }
    if (n == 4) {
        OOLUA_C_FUNC_4(void, texture, const std::string&, const pi_vector&, const pi_vector&, const pi_vector&)
    }
    _static_dispatch_fail();
    return 0;
}

Using the macros, this gets written as:

STATIC_DISPATCH_START(pi_model,texture)
    STATIC_FUNC_1(void, texture, const std::string&)        
    STATIC_FUNC_4(void, texture, const std::string&, const pi_vector&, const pi_vector&, const pi_vector&)
STATIC_DISPATCH_END

That works well. Later I found another problem, which was a bit trickier to solve. The function extrusion() looks like this in Lua:

function extrusion (start, end, updir, radius, ...)

The first three args are vectors, the fourth is a number (float). That's not important though. What's important here is that following the required args comes an arbitrary number of vectors defining points for the shape to extrude. This posed a problem - it's easy to create a macro that says "expect 4 or more arguments", but OOLua's function call mechanism fails outright if the number of args on the Lua stack doesn't match the number of arguments called for.

The solution I settled on was to define a separate STATIC_FUNC_4_VA macro. When this appears in the thunk definition, it looks for extra arguments on the stack and puts them into a Lua table. It then pushes the table and the number of items in it onto the stack and calls the function with two extra arguments. All this gives the following:

STATIC_DISPATCH_START(pi_model,extrusion)
    STATIC_FUNC_4_VA(void, extrusion, const pi_vector&, const pi_vector&, const pi_vector&, float)
STATIC_DISPATCH_END

static void extrusion(const pi_vector& start, const pi_vector& end, const pi_vector& updir, float radius, OOLUA::Lua_table t, int nt)

STATIC_FUNC_4_VA expands like so:

if (n >= 4) {
    _static_varargs_table(l,n);
    OOLUA_C_FUNC_6(void, extrusion, const pi_vector&, const pi_vector&, const pi_vector&, float, OOLUA::Lua_table, int)
}

You can go read _static_varargs_table() too. It's interesting, but not really relevant to this discussion.

So right now this is all working wonderfully well. I'm not quite finished refactoring all the functions, but that's only a short hop away. But there is one fatal flaw in all this which I'm really struggling with right now. The problem is that calls made by OOLua via its method/function macros don't have ready access to the lua_State representing the interpreter, which means if at any point OOLua can't do something for you (which is often) and you need to drop back down to the standard Lua API, you're stuck.

In LmrModel this is not a problem, as the state is held in a global. In LuaUtilFuncs however there's no such global and indeed, you wouldn't want one, as those functions are used by several different interpreter contexts throughout the codebase.

It's actually tricky to solve this one. Obviously when the registered function is called, it's called with the Lua context in the args, so we do know it. But we lose it as soon as we ask OOLua to call our function with its fancy type checks and stuff. We could store it in a global just for the duration of that call, but then we aren't re-entrant, which could be a real problem down the track. I don't want to do that.

The only idea I have at this point is to push a pointer to the context onto the Lua stack so that OOLua can unpack it and pass it along, but this seems rather heavy. It's not just a pointer either; due to the way OOLua does its type handling I have to push a full object instance. That's a slight lie; a few primitive types like int and float don't need an object, but I don't want to do crazy stuff like casting pointers to integers to make this work.

I will try that option, but I'm keen to find something else. A thought that occurred to me is that perhaps this is all wrong; perhaps it should always be the case that these functions are actually called as object methods. It makes a certain amount of sense. The model definitions could be built up in an object rather than in globals, which paves the way for model loading to be done in parallel in the future. The difficulty with this however is that the pseudo multiple-dispatch that I've implemented for functions is not available for method calls, so I'd be undoing a good amount of work.

I think at some point I'm going to need to take all this to the OOLua author and discuss getting it implemented properly. It's a fantastic system and I don't want to have to move away from it, but it's starting to run out of steam. The author has been very responsive and helpful, so I expect there's lots that can be done, which is hopeful.

That'll do for now.

saturday, 5 february 2011

posted at 09:13

Hmm, I don't write here much anymore. As is the case for lots of people, my blogging has suffered in favour of spewing random crap into Twitter, Facebook and elsewhere. I'm actually doing a lot more stuff via "social" sites in general, so I think a redesign of this site might be necessary soon to bring all that stuff into one place, so I don't look dormant to anyone who just follows here.

Anyway. Christmas and holidays have come and gone. I spent a good amount of time on the big idea that I mentioned last time, and got it close to finished, but then in performance tests found that the naive old-school Ajax implementation I'd done wouldn't scale much past 30 users. That's unacceptable, so I started to read up on WebSockets, Comet and other ways to reduce network and processing load on a web application. I settled on using Tatsumaki to implement a long-polling version, but that meant a rewrite of much of the server and the client. At this point I was well and truly on holiday and my brain had shut off, so I threw the project on the backburner.

This can be a dangerous thing for me, because I inevitably change my mind and do something else. I started watching Babylon 5, a show I'd somehow missed back in the day. Anyone who's read this blog for a while knows about my long infatuation with space games, so of course I started looking around for something spacey to do or play. And last week, I found Pioneer.

Pioneer is a clone of Frontier, the sequel to Elite. I always much preferred Frontier to Elite. I think it was mostly because of the scale - I could fly forever, land on planets, and just bum around in my spaceship. So I grabbed the code, built it and had a play. And I was sold on it quickly, because it's awesome, but it still has a giant stack of things that need work. In the spirit of the new "just do it" style I'm trying to live up to, I decided first that I wanted to hack on it, and then started playing and figuring out what I wanted to hack on. After a couple of hours of play I'd found a heap of little bugs and tweaks that needed fixing, and because the main coder is an awesome guy, lots of my stuff has already been merged.

Not much else to write. This is mostly a "here's what I'm up to" kind of post, so now you know. If you like space games do grab Pioneer (and pull my integration branch if you just want to see what I'm doing). Cheers :)

saturday, 13 november 2010

posted at 23:10

Anyone who has spent any time around me knows that I'm something of a dreamer. I think big, and if I get an idea I tend to overthink it to the point where even the thought of starting is completely demotivating. As a result, something I've been thinking about and working on for over ten years has perhaps a thousand lines of code over several rewrites to show for it.

Two articles I've read this week have both challenged me and inspired me to produce something. The first is a blog post by Tom Critchlow describing how he built booklist sharing site 7bks.com in four weeks with no prior programming experience. This is a hugely impressive feat and shows just how much the web has changed in the last few years. It used to be that sites like Geocities existed to make it easy for newcomers to build a website. Now, with web frameworks like Django and Ruby on Rails and easy-to-use infrastructure like App Engine, it's easy for newcomers to build a web application. I've been following Pamela Fox's efforts to bring the Girl Develop It program to Australia. Although its goals are different (bringing more women into IT), its recent success reinforces the point: it's now easy for people with no prior experience to get into web application development.

The second article is from the ever-insightful Rands in Repose, on how the recently-released Instagram is so awesome because of everything it doesn't do. I'm not entirely sure of the timespan involved, but the story goes that they built an amazing mobile photo sharing application in only eight weeks. They did this by remaining completely focused on the core feature set and not allowing any distractions in. Conceptually what they're doing has been done several times before, but by remaining focused they've collected huge numbers of users and stand a good chance of beating out the competition.

I found both these stories rather inspiring. I have my great idea, and I have more experience than many of the people doing these amazing things. It should be easy for me to do all manner of amazing and interesting things, but I get mired in details. However, I'm now resolved. I've spent much of the day distilling all my plans and dreams and thoughts into one very simple concept, and I'm going to have a crack at building something soon. I'm making a huge effort to not overthink it - of course I have some ideas for where it could go, but I'm keeping those as high-level next steps, and not thinking too much about the details.

Unfortunately I don't think I'm going to have much time in the next few weeks to do much, but I'll be taking a proper holiday for four weeks after Christmas and I'd really like to have something to show for it after that.

And no, I'm not telling you the idea yet. It's probably not that amazing or exciting to most people, but it's been mine for years and years and I'm not willing to share it until I have something to show for it :)

saturday, 6 november 2010

posted at 16:48

I've recently picked up maintainer duties for Net::OpenID::Consumer and Net::OpenID::Server, which have needed some love for a while. I'm starting to get them into shape and have today released developer versions to CPAN. If you're using these modules to implement an OpenID provider or relying party, then I would greatly appreciate you taking the new versions out for a spin to make sure nothing breaks.

All the details are in this post to the openid-perl list: Net-OpenID-* 1.030099_001 now on CPAN

wednesday, 6 october 2010

posted at 12:01

Just a reminder that I'm speaking at the Melbourne chapter of the Google Technology Users Group tonight about the details of Monash's move to Google Apps. If you're in Melbourne and without plans you should come along!

friday, 1 october 2010

posted at 09:44

Well almost. I'm speaking at the Melbourne chapter of the Google Technology Users Group next week about the details of Monash's move to Google Apps. I don't have a lot of speaking experience so I'm a bit nervous, but it's all coming together nicely and I think it's going to be a lot of fun. If you're in the area do come along and support me and learn some stuff too :)

friday, 20 august 2010

posted at 22:26
tags: beth-green

So Australia goes to the polls tomorrow to elect a new Federal Parliament for the next few years. My primary vote this year will go to the Australian Greens. A few people have asked me why, so I thought I'd write a little bit about my thinking. This isn't particularly coherent, just a series of random thoughts that point roughly in the same direction.

I think a solid third choice party is extremely important for a healthy democracy in this country. Without it the two major parties squabble like spoiled children. We need someone to step in and calm things down a bit. There's nothing particularly special about the Greens for this purpose; the Democrats used to fill the gap fairly well before they imploded. I'm also not under any delusions that the Greens wouldn't act the same way if they were one of the major parties. You want someone big enough that their voice and vote can make a difference, but small enough that they can't do anything useful by themselves.

This is even more important in the Senate. If there were only two parties then we'd be faced with either a friendly Senate where the government can do whatever they want without question, or a hostile Senate where the Opposition parade about and flex their muscles and prevent any work getting done. Neither is healthy. It's imperative that there be a solid third party in the Senate to force discussion and compromise to happen.

As for the Greens as a party, I don't agree with everything they stand for, but I'm not self-important enough to believe that a party concerned with the needs and wants of an entire country should cater to my every whim and fancy. For the most part though their vibe seems to be "look after the planet and each other", and that's something I can support. To their credit they have details of what they stand for on their website all the time, as opposed to the other parties, who appear to change their tune depending on what the flavour of the month is.

I've also thoroughly enjoyed listening to a couple of Greens members speak over the course of the campaign. Scott Ludlam was involved in the recent Communications Forum and I found him to be articulate and engaging while Conroy and Smith resorted to name-calling more than once. Bob Brown I've heard speak numerous times, and he was the same as always - engaging, interested and passionate.

The passion is contagious too. One of the problems with the big parties is they spend too much time looking for power instead of worrying about the right thing. The smaller parties tend to be far more passionate about their interests because you don't join a small party looking to gain power; you join because you care.

I've mostly talked about the Senate here. I live in the relatively safe seat of Menzies in Victoria. It's one of those seats that a few parties run a candidate in but no campaigning actually happens. I think it's pretty sad really, because I'd love to vote for a local candidate that actually seemed interested. I had no hesitation in voting for the Labor candidate (Andrew Campbell) in 2007 because he went doorknocking in our street during the campaign and was happy to come inside and have a coffee and chat with us. Like I said above, I don't need you to give me everything I want, but make me feel like you care and I'll happily follow you.

This time around there's nothing like that, so I'm left to other means to figure out where my vote is going. For me it's more a case of working out who I'm not voting for. The reigning champion is the delightful Kevin Andrews, a man that I frankly find sickening. He has a long history of controversy during his tenure, and I can't in any good conscience give him my support. It doesn't help his cause that he's the only candidate that has sent out any kind of information during this campaign, and it's all been negative. Of course, I'd still rate him higher than the Family First candidate. I'm not even going to go into why I consider them to be complete nutbags; let's just say that if you're reading this and you disagree then you're probably not in my target audience :)

So I have a Labor and a Green candidate left over. Since I don't know anything about them I'm left to choosing by party. Voting Green first effectively results in a Labor two-party-preferred vote, but with the important point that the Greens get a primary vote data point. While that kind of statistic does not affect the outcome of the election, it does inform party strategists when gauging the opinions of the populace. I want my vote to register the fact that although I'm notionally voting Labor, Green issues are important to me.

So that's my vote. A few other points:

  • If it's not already clear, my general preference is towards Labor over Liberal. Tony Abbott strikes me as being a very nice fellow and I'd love to buy him a beer and have a chat with him, but he seems a bit random and kneejerky for a leader. And not just him: some of his crew seem worryingly incompetent. I'm looking in particular at Andrew Robb and Tony Smith. At least Joe Hockey gives me a chuckle. But generally, the Liberals feel tired and old. Another term or two in opposition might give them a bit of a chance to regroup.
  • On the Labor side I'm less interested in the personalities and more in what they've done in the last term. They don't seem to have made any particularly huge mistakes, despite all the carry-on about the insulation programme and the school halls. Yeah, it wasn't perfect, but nothing ever is. For the most part the approach has seemed right. As for them dropping Kevin Rudd at the first sign of trouble, I don't particularly agree with it but I also know enough about the Labor Party to know that it's not going to do much to change policy. As I read back over this I think that I might not be being hard enough on them, but really I just don't see anything of note, good or bad.
  • I'm really confused by Steven Conroy. In the aforementioned Communications Forum I thought he did a pretty good job arguing the merits of the National Broadband Network, but then fell apart completely when talking about the filter. He's been trotting out the same tired arguments for two years now and borders on foaming at the mouth when confronted with the quite rational arguments to the contrary. I just don't understand how he can be insightful and confident on one issue and a blubbering idiot on another. He's either unstable or he's in someone's pocket, both of which I consider dangerous. For this reason I'm dropping him way down the list on my Senate ballot.
  • I loathe the Liberals' focus on "stopping the boats". It's both a non-issue in the larger debate about immigration and against my "be excellent to each other" vibe. Australia is a good place to live. For many of the people looking for a way in, their country sucks. We've got an insane amount of room and a crazy amount of spare food. You're no threat to my way of life. Come on in, enjoy the country and add whatever you're good at to make it even more awesome. And yes, there's lots of people who will make a big song and dance about a fat pile of related issues, but at the end of the day these are people who need a hand, and we're in a position to help. Why shouldn't we? So anyway yeah, that loses the Libs a few points.

I'm kinda getting tired of typing now, so I'll stop. Ultimately your choice is your own, and I'd never try to tell you that you're making the wrong choice. All I ask is that you make an informed choice, considering carefully what you personally stand for and how the people and the parties involved align with that. Don't fall for the sound bites, they don't tell you anything.

And if you still don't know, or think your vote won't count, then here's a nice tidbit from The Conscience Vote:

And when you go to the polls tomorrow, don't - don't, I beg you - cast an informal vote. If you can't stand either of the major parties, put your vote where your heart is - and don't let anyone tell you that it won't count. Because you can bet that when the figures finally come in from the Electoral Commission, strategists and analysts from both sides will be going over the fine detail. Every vote that bleeds to the Greens or a minor party is a signal of discontent with the status quo.

And you're not "sending a message", regardless of what Mark Latham tells you. You're just lumped in with every ballot paper that was incorrectly filled in, illegible or just plain doodled on. If you want to send a message, do it with a valid vote.

Every single vote matters.

And remember to watch the ABC coverage of the count tomorrow night. There's graphs and stuff - it's geeky and cool :)

friday, 6 august 2010

posted at 19:23

Disclaimer: I work for Monash University, but I don't speak for them. Anything I write here is my own opinion and shouldn't be relied upon for anything.

So Google Wave has been put out to pasture. That makes it a good time for me to write a bit about what I've been working on in the last few months and what I think about the whole thing.

For those that don't know, I work for Monash University as a web developer and Google Apps specialist. We've spent the last ten months rolling out the Google Apps suite to our staff and students. We completed our rollout to some 150,000 students last month, and so far have about 10% of our staff across. A big part of the reason we've been able to move that fast is that for the most part, people are extremely excited about Google technologies and how they might use them for education. That excitement goes to the highest levels of the University (one of our Deputy Vice-Chancellors was the first production user to go) and has seen Google Apps included in our recently-announced strategy for online and technology-assisted learning and teaching, the Virtual Learning Environment.

The interest from our users in Google extends beyond the Apps suite of products to pretty much every product that Google offers, and perhaps none more so than Wave. Through the eEducation centre Monash has already been doing a lot of research into how teachers and students can teach and learn from each other (instead of the traditional top-down lecture style) and how technology can assist with that. Groups sharing and building information together is really what Wave excels at, so it wasn't long before we started seriously considering whether or not Wave was something we could deploy to all of our users.

There were three main issues that needed to be addressed before this could happen:

  • Wave doesn't have enough features to allow a lecturer or tutor to control access and guide the conversation flow.
  • The sharing model opens some potential legal issues surrounding exposure of confidential information, particularly to third-party robots.
  • The Wave UI does not meet the stringent accessibility requirements that University services are held to.

Over the last few months we've been working with the Google Wave team to address these issues.

The first is simply a case of the Wave team writing more code. It's well known that they have been thinking about and working on access control. Plans exist for limiting access to a wave to a set group of users, allowing robots to better manage the participants in the wave, locking the conversation root down so that users can only reply, and so on. In many ways it's the easiest thing to fix, and given the commitment from the Wave team to talk to us and do something to help with what we needed, we were never particularly concerned about this stuff.

I won't comment much on the legal side of things, mostly because I don't understand most of it. I do know that it's a serious issue (eg Victorian privacy law is perhaps the strictest in the world) but it's something that our solicitors have been working on and it probably would have come out ok in the end, if for no other reason than if it didn't, people would just use the public Wave service with no protection at all. Users are notoriously bad at looking after themselves :)

The accessibility issues are where my interest in Wave came from so I'll spend a little time there.

I'll be the first to admit that I don't really get accessibility. I am in the happy position of having my whole body working as designed, and to my knowledge all my close friends and family are the same, so I really have very little exposure to the needs of those who are perhaps not so fortunate. What I do understand though is that it's critically important that information be available to everyone equally, and achieving that is far more complicated than the old tired lines of "add alt attributes to your images" and "don't use Javascript". So I'm very happy to follow the lead of those who do know what they're talking about.

Not far away from me in my building we have a wonderfully competent team of usability and accessibility experts. They were asked to do an accessibility review of the Wave client and perhaps not surprisingly, it failed hard. Most of it comes from the difficulty of expressing to assistive technologies (eg screen readers) that something in a page has changed, particularly with proper context. The Wave client builds a complex UI on the fly (eg as the wave is incrementally loaded) and of course has realtime updates. At a more basic level though the static parts of the interface are constructed without using semantically-correct markup. A user agent (eg a screen reader) that scans the page looking for interesting things like links pretty much comes up with nothing.

The accessibility team presented their findings to some people from the Wave team and the response from where I sat appeared to be equal parts of surprise and dismay. They were receptive to the issues raised though. I travelled to Sydney shortly afterwards for DevFest and had the opportunity to chat to some of the team and they all had seen or heard of the report, so it would appear that it was taken seriously.

For me though, I could see that this had the potential to be a real showstopper for our deployment, and I didn't want that as I could see the potential for Wave to be a game-changer. Since at the time I knew very little about accessibility, I started work on answering a more technical but somewhat related question: "can Wave work without Javascript?". The Wave team had just released a data access API so I set to work trying to build a client using it. That work grew into the (still unfinished) ripple which more or less answers the question in the affirmative. This type of client doesn't solve the accessibility issues but it's definitely a step in the right direction.

The part of ripple that I'm most proud of is the renderer. Rendering a wave well is actually quite a complicated prospect. Styles and links are represented as ranges over the static textual content. It's possible for these ranges to overlap in complex ways that make it difficult to produce semantically-correct HTML. It took three rewrites to get it there, and there's still a couple of little nits that I would have addressed sometime if this code had a future, but I mostly got there and I was happy with it :)
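
To give a feel for the problem, here's a toy sketch of the general approach, written just for this post - it's not ripple's actual code. The trick is to cut the text at every range boundary, then emit each segment wrapped in whatever tags cover it, closing and reopening at each cut so the output always nests properly:

#!/usr/bin/env perl

use warnings;
use strict;

# ranges are hashrefs of (start, end, tag) and may overlap arbitrarily
sub render {
    my ($text, @ranges) = @_;

    # every range boundary is a cut point
    my %cuts = (0 => 1, length($text) => 1);
    $cuts{$_->{start}} = $cuts{$_->{end}} = 1 for @ranges;
    my @pos = sort { $a <=> $b } keys %cuts;

    my $html = "";
    for my $i (0 .. $#pos-1) {
        my ($from, $to) = @pos[$i, $i+1];
        # the ranges covering this whole segment
        my @active = grep { $_->{start} <= $from && $_->{end} >= $to } @ranges;
        $html .= join("", map { "<$_->{tag}>" } @active);
        $html .= substr($text, $from, $to-$from);
        $html .= join("", map { "</$_->{tag}>" } reverse @active);
    }
    return $html;
}

# overlapping bold and italic force a split at the boundaries:
# <b>hello</b><b><i> wave</i></b><i>y wor</i>ld
print render("hello wavey world",
    { start => 0, end => 10, tag => "b" },
    { start => 5, end => 15, tag => "i" }), "\n";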

Anyway, these problems were being addressed, a few areas around the university started doing research and small pilots using Wave, and it all seemed to be only a matter of time. I started work on a robot/data API client library for Perl for two reasons, one being that ripple really needed its server comms stuff abstracted properly, and two being that we're a Perl shop and we would soon want to host our own robots and integrate them properly into our environment.

This was a great opportunity for me to learn Moose and my suspicions have been confirmed - Moose is awesome and I'll use it for pretty much everything I do with Perl moving forward. A few weeks later we get to Wednesday night and I've got things to the point where you could have a nice conversation with a Perl robot. And then I got up Thursday morning and heard that Wave was going away and all my code had just been obsoleted.
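
If you haven't seen Moose, here's a tiny made-up class (nothing to do with the actual robot library) showing the declarative style that won me over - typed attributes, defaults and accessors all falling out of a single declaration:

package Robot::Greeter;

use Moose;    # Moose turns on strict and warnings for us

has name     => (is => 'ro', isa => 'Str', required => 1);
has greeting => (is => 'rw', isa => 'Str', default  => 'hello');

sub greet {
    my ($self, $who) = @_;
    return $self->greeting . ", $who, from " . $self->name;
}

__PACKAGE__->meta->make_immutable;

package main;

my $greeter = Robot::Greeter->new(name => "perlbot");
print $greeter->greet("wavers"), "\n";    # hello, wavers, from perlbot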

I was shocked initially, but I was surprised that I didn't feel angry or sad or anything. I can hardly call the time I spent on it a waste as I learned so much (Moose, tricky HTML, accessibility, operational transforms) and met some incredibly smart and awesome people, at Monash, at Google, and elsewhere. I think for the most part though I was ok with it because it's probably the right decision.

We (as in the Wave users and developers everywhere) have been playing with Wave for over a year, and we still don't know what it is and what it's for. Unless you have the ability to build your own extensions it doesn't really do much for you. The interface is painful, the concepts within don't match anything else we're used to, and despite various mechanisms for exposing internal data, you're still pretty much confined to the Wave client if you want to get anything useful done.

The technical issues would have been addressed with time. We would have gotten enough functionality to write a full-blown replacement client. It would have gotten much easier to expose not only data but the structure of data in other applications. But if you take that to its conclusion, Wave becomes a backing store for whatever frontend applications you build on top of it.

But what of the interface? By having lots of different ways to structure and manipulate data, Wave tries to let you focus on the task at hand rather than the structure of the data. Traditional applications (web-based or desktop) are tailored to their own specific data models, so we have separate apps for email, calendars, spreadsheets, etc. Wave wanted to pull all that information together so you could work on all the pieces of your puzzle in the same space. You start a document, then realise you need some help, so you bring in friends and talk about the doc as you build it together. You need to process some data, so you drag in a spreadsheet gadget. You embed images or videos or whatever else you need to add to the discussion. Robots can help out by mining external databases and manipulating in-wave data to present even richer information and even allow feedback via forms. It's all a nice idea, but how do you represent the different kinds of data and structure effectively? Wave tried, and we tried, but I'm not convinced anyone really had a clear idea of how to build an interface that makes sense.

It might not have been an interface issue. It might be that people want to have separate loosely-integrated applications, one for each of the different types of data they want to manipulate. I don't think that's the case, but I think that a clearer migration path from those other applications would have helped a lot. People first came to Wave wanting to do their email in it. What if from the outset they could have easily pulled mail into Wave, and there was a "mail mode" that allowed some manipulation of Wave data in a way that they were familiar with? What about doing similar things for other data types? I don't know how much difference that sort of thing would have made, but something, anything to answer the "what do I do with this" question that everyone had at the start couldn't have hurt.

Wave's failure may also just be a problem of timing and circumstance. The Wave team have regularly acknowledged that they were surprised by the response. The message was supposed to be "we made something different, what do you think?". Unfortunately it was painted in the tech media as an "email killer", which of course it wasn't, but of course that's going to get everyone interested. Being such an early preview, Wave was naturally buggy and slow and couldn't accommodate the load caused by the droves of users that wanted to play. So you got swarms of people banging down the door to see what all the fuss was about, and the few that got in found that it wasn't what they'd been led to believe it was, and none of their friends could get in so they couldn't try it for what it was. So naturally they disappeared, disappointed, and even later when the bugs were fixed and the system was stable the first impression stuck and those users couldn't be lured back. And although there was a bit of a second wind a couple of months ago after I/O 2010, the same "what do I do now?" question came up.

From what I've seen of Google in the past, they're willing to take risks if they see a likely or even possible positive outcome. But looking at Wave, how much future did it really have? We loved it, and we saw that it could do things better than existing services (though with some effort), but was it really going to displace them for the casual user? Was it going to make any serious money for Google? Was it ever even going to break even? (Remember that it takes plenty of infrastructure and manpower to develop and maintain something like this.)

Based on all of this, you can totally understand an executive saying "guys, I see what you're trying to do, and thanks for trying, but the numbers just don't add up". It's not like it's been a complete waste - there's some awesome technology that's already finding its way into other Google applications (eg Docs now has live typing just like Wave).

So is Wave dead? The product is, but as a concept it lives on. We're fortunate that Google and others have given us plenty of docs and code, and their pledge to open-source everything remains. Then there are the third-party protocol implementations that already exist, both open-source (eg PyGoWave, Ruby on Sails) and commercial (eg Novell Pulse, SAP StreamWork). It will take some work, but any one of us could build and deploy another Wave. The question is, would you want to? I think it's more likely that we'll see people incorporating bits of the technologies and concepts into new products. And maybe, just maybe, in a few years time some of the work that Wave pioneered will be commonplace and people will be amazed and we'll be those old curmudgeons saying "eh, Wave did that years ago".

So for Monash, we'll continue working on our existing plans. We've mostly been looking at Wave as a delivery platform for what we wanted to do. Not having it available means we'll have to look elsewhere for the technology we need (whether that's buying or building), but our direction won't change.

And for me? I won't continue work on ripple and the Google::Wave::Robot code, but they'll live on in GitHub should anyone want to rip them off for anything. My next project is building an OpenSocial container in Perl with a view to integrating it into the Monash portal (my.monash, which is where my "Web Developer" duties lie); hopefully I'll write something about it! I will however be hanging around Wave until the bitter end, and I would like to do something with operational transforms in the future as they look really cool and interesting. See, it's not dead, really!

And to any of the Wave team reading this: thanks, guys. You've kept my interest and my enthusiasm alive, you've put up with my incessant questioning and harassment, and you've contributed more good ideas and happiness to me and my colleagues than you're probably aware of. For the few of you that I've met and worked with already, I really hope that this isn't the end and that we get to work together in the future. I'll probably stalk you for a while to see where you end up because frankly, people are far more interesting than technology and you've all proven yourselves. Cheers :)

saturday, 12 june 2010

posted at 21:32

Today I declared myself officially on the Google Wave bandwagon when I released a tiny Wave client called ripple. I wrote it to see if it would be possible to make Wave work in only HTML, something we may soon want for work if we're to provide an accessible alternative interface to Wave for our users.

From what I'm hearing from the Wave crew, this is also the first example of doing something significant with Wave in Perl. That's exciting.

There's more detail on the splash page, but here are some quick links to get you started:

friday, 14 may 2010

posted at 09:28

Another little program from my toolbox. This one I'm quite proud of. It's a tiny little file transfer tool I call otfile (that is, one-time file).

The idea is this. Quite often I need to send a file to someone on the work network. These can vary from small data files or images to multiple gigabytes of raw data or confidential documents. Our network is fast and the network and servers themselves are considered secure so I don't have to worry about eavesdropping, but there's a real problem with the transport mechanisms - they all suck.

I can put the file in an email, but there are transmission and storage size restrictions. It's also fiddly - create message, attach file, send.

I can put the file on a web server, but the only ones I have ready access to are publicly accessible, so I have to set up access control. If it's a large file then I have to think about disk space on the server (usually an issue) and then I have to wait while the file copies before sending the recipient a link. Oh, and I have to test that link myself because invariably I've screwed up file permissions or something else.

Probably the closest to what I want is file transfer via IM, but for various reasons that's currently blocked at the network level. I could probably get that block changed but it'd mean a bunch of negotiations for something that isn't actually related to my job. It's not worth my time.

So I wrote otfile. You run it with a single file as an argument, and it creates a web server on your machine with a randomised url for the file. You paste the url into an instant messaging session (I'm chatting with my team all day long). They click it, the file downloads directly from the source, and then, crucially, the script exits and the web server goes away. That url, while open for anyone to connect to, is near impossible to guess and only works once. That's secure enough for me.

The major thing I think this is missing right now is the ability to do multiple files at once. It's not that big of an issue because it's pretty easy to run multiple instances - just a shell loop. If I went for multiple files I'd have to decide if I want to make it produce multiple urls (a pain to paste and to then require someone to click on them all), produce a directory listing (what are the semantics? when do the files disappear? when does the server shut down?) or build some kind of archive on the fly (cute, but is that painful for the receiver?). I'll probably just dodge it until I use it like that enough to be able to ask the receiver what they would have expected.

#!/usr/bin/env perl

use 5.010;

use warnings;
use strict;

use autodie;

use File::MMagic;
use File::stat;
use UUID::Tiny;
use Sys::HostIP;
use URI::Escape;
use Term::ProgressBar;

use base qw(HTTP::Server::Simple);

# interfaces to try first when picking the address to advertise
my @preferred_interfaces = qw(eth0 wlan0);

say "usage: otfile <file>" and exit 1 if @ARGV != 1;

my ($file) = @ARGV;

# check we can actually read the file before announcing it (autodie dies if not)
open my $fh, "<", $file; close $fh;

my $mm = File::MMagic->new;
my $type = $mm->checktype_filename($file);

my $size = (stat $file)->size;

my ($fileonly) = $file =~ m{/?([^/]+)$};

my $uuid = create_UUID_as_string(UUID_V4);

print "I: serving '$file' as '$fileonly', size $size, type $type\n";

my $server = __PACKAGE__->new;

# prefer a known interface, otherwise fall back to whatever Sys::HostIP picks
my $interfaces = Sys::HostIP->interfaces;
my ($ip) = grep { defined } (@{$interfaces}{@preferred_interfaces}, Sys::HostIP->ip);

my $port = $server->port;
my $path = "/$uuid/".uri_escape($fileonly);
my $url = "http://$ip:$port$path";

print "I: url is: $url\n";

$server->run;

# set by setup() when a request arrives for anything but our magic path
my $error;

sub setup {
    my ($self, %args) = @_;

    print STDERR "I: request from $args{peername}\n";

    if ($args{path} ne $path) {
        $error = "403 Forbidden";
        print STDERR "E: invalid request for $args{path}\n";
    }
}

sub handler {
    my ($self) = @_;

    if ($error) {
        print "HTTP/1.0 $error\n";
        print "Pragma: no-cache\n";
        print "\n";
        return;
    }

    open my $fh, "<", $file;

    print "HTTP/1.0 200 OK\n";
    print "Pragma: no-cache\n";
    print "Content-type: $type\n";
    print "Content-length: $size\n";
    print "Content-disposition: inline; filename=\"$fileonly\"\n";
    print "\n";

    my $p = Term::ProgressBar->new({
        name => $fileonly,
        count => $size,
        ETA => "linear",
    });
    $p->minor(0);

    my $total = 0;
    while (my $len = sysread $fh, my $buf, 4096) {
        print $buf;
        $total += $len;
        $p->update($total);
    }

    $p->update($size);

    close $fh;

    # one-shot: exit as soon as the file has been sent so the url can't be reused
    exit;
}

sub print_banner {} # silence HTTP::Server::Simple's default startup banner
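
A session looks something like this - the address, port and UUID here are made up for illustration, but the lines match the prints in the script (the progress bar comes from Term::ProgressBar):

$ otfile holiday-photos.zip
I: serving 'holiday-photos.zip' as 'holiday-photos.zip', size 1048576, type application/zip
I: url is: http://192.168.1.10:8080/f81d4fae-7dec-41d0-a765-00a0c91e6bf6/holiday-photos.zip
I: request from 192.168.1.20
holiday-photos.zip: 100% [============================================]

One paste into IM, one click at the other end, and the server is gone.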

I really need to set up a repository for things like this. Not hard to do of course, I'm just not sure if I should have one repository per tool, even if it's just a single file, or all these unrelated things in one repository. I'll probably just do the latter; it's way easier to manage.

wednesday, 28 april 2010

posted at 22:36

It's kind of hilarious that out of everything I've done in the last couple of months this is the thing I decide to come up for air with, but it's been that kind of a day. This is the result of three hours of study and hacking. It's using the new IMAP XOAUTH mechanism implemented by Gmail to let me log in as one of my users via IMAP.

#!/usr/bin/env perl

use warnings;
use strict;

use Net::OAuth;
use URI::Escape;
use MIME::Base64;
use Mail::IMAPClient;

# user to connect as
my $username = q{some.user};
# apps domain
my $domain   = q{some.domain.com};
# oauth consumer secret. dig it out of the "advanced settings" area of the apps dashboard
my $secret   = q{abcdefghijklmnopqrstuvwx};

my $url = 'https://mail.google.com/mail/b/'.$username.'@'.$domain.'/imap/';

my $oauth = Net::OAuth->request('consumer')->new(
    consumer_key     => $domain,
    consumer_secret  => $secret,
    request_url      => $url,
    request_method   => 'GET',
    signature_method => 'HMAC-SHA1',
    timestamp        => time,
    nonce            => int(rand(99999999)),
    extra_params => {
        'xoauth_requestor_id' => $username.'@'.$domain,
    },
);
$oauth->sign;

# rewrite the OAuth header into the XOAUTH initial client response:
# "GET <url>?xoauth_requestor_id=<user> <signed oauth params>", base64-encoded
my $sig = $oauth->to_authorization_header;
$sig =~ s/^OAuth/'GET '.$oauth->request_url.'?xoauth_requestor_id='.uri_escape($username.'@'.$domain)/e;
$sig = encode_base64($sig, '');

my $imap = Mail::IMAPClient->new(
    Server        => 'imap.gmail.com',
    Port          => 993,
    Ssl           => 1,
    Uid           => 1,
);
$imap->authenticate('XOAUTH', sub { $sig }) or die "auth failed: ".$imap->LastError;

print "$_\n" for $imap->folders;

I guess three-legged OAuth would be pretty similar to get going, but I don't have a particular need for it right now.

thursday, 4 march 2010

posted at 09:51

Nothing big to report, just work plodding along, so here's an update in pictures.

Here's the fully assembled N64 RGB DAC board:

It's not currently working though, so some debugging is required. When I hook it up I get no picture, and occasionally it seems to short out the whole console. I haven't really had time to diagnose it properly yet. I'm mostly waiting to figure out a systematic approach, since it's all a bit confusing right now.

My other project is the USB-to-N64 controller bridge. I've written a lot of AVR assembly for this so far but haven't done any actual hardware work. It's coming very soon though, so I very carefully removed the cable from one of my controllers (so I have the option of putting it back together later) and added some pins to the end so I can connect up a breadboard:

This is my first outing with heatshrink tubing and it was a breeze. My wife has a hot air gun that she uses for her craft work, mostly with embossing inks, so I borrowed it and it worked brilliantly. I was surprised at how much physical strength it actually gives.

This is the insides of my N64:

The large ribbon is the digital signal tap for the RGB DAC, soldered to the inputs on the existing composite DAC chip (easier to solder to that chip than to the much narrower output pins on the video chip). The brown/black pair on the left is the 3.3V power feed for the RGB DAC. Over there on the right under everything is a DE9 D-sub connector with lines for the outputs from the RGB DAC (the narrower ribbon), audio from under the multi-out (the purple/gray/white ribbon) and a 5V line that's needed for some SCART signalling (the fat red wire). Right now it's actually hooked to a 3.3V line under the board because I was testing something. Soon I'll hook it instead to the 5V regulator you see just to the right of the composite DAC.

Finally, some recent ebay loot:

Clockwise from top left: a pack of 78L33 3.3v voltage regulators; a sheet of 6 74HC374 8-bit latches and 4 74HC04 inverters; an anti-static pack containing two ATmega88 microcontrollers; a giant roll of 560-ohm 1% resistors (190 left on the roll); a tube of 74HC04 inverters; and a pack of 10n ceramic capacitors (which I use for IC bypass caps).

As I've mentioned before, ebay is an incredible source of cheap parts. There's less than $30 of parts in this picture, and that's not everything I've bought recently. I love getting home every second day and there's a little parcel waiting for me!

tuesday, 9 february 2010

posted at 08:49

My laptop has had an interesting couple of days. The main filesystem went read-only a couple of nights ago after a couple of random journal errors. After being fsck'd and cleaned up it did it again, so I reinstalled it and restored it from backup yesterday. Then last night it overheated, leading me to open the case and clean the wall of dust out of the fan. It's back together now, but a couple of lost tiny parts means I have no indicator lights and no trackpoint. Fortunately the trackpad still works, but it's taking a little getting used to. On the other hand, it's not burning my lap or my hands anymore, so it's probably an overall victory, though it's not quite feeling that way yet.

One of the things I did lose in the rebuild, due to it not living in one of my backup locations (which are /etc and /home/rob), is my cute little mobile roaming script. I rewrote it on the bus on the way home yesterday and thought that perhaps it's interesting enough to post here.

The basic idea is that every day I switch between at least two networks. My home network has a PC in the hall cupboard which among other things runs a web proxy and a mail server. It's also the firewall, so web and mail traffic can't go out directly. Work, on the other hand, implements transparent proxying (with some network authentication) and has an SMTP server, but naturally it has a different address. I also occasionally use other networks (friends' places, coffee shops, etc) which usually have no facilities at all, requiring me to fend for myself.

My laptop runs an SMTP server (Postfix) of course, because that's just what you do on Unix. I also run a Squid proxy which I point all my local HTTP clients at. This way, when I move networks, I only have to reconfigure the local proxy rather than tweak every web client I have.

I spent a long time looking for a decent roaming reconfiguration package, but I never managed to find one. Some would try to do network detection and too often get it wrong. Some had overly complicated and/or feature-deficient configuration languages. I vaguely recall that I really liked one of them, but it was tightly integrated with NetworkManager, which I don't use because it could never seem to keep the network alive for more than a few minutes (and it appears to be pretty much tied to the GUI, which is painful when I need network on the console).

So, in the finest open source tradition, I rolled my own. The script itself is trivial; its just a tiny template expander. I'll list the script in a moment, but first I'll talk about its operation.

The script, which I call location, takes a location name on the command line (like home or work), runs over a (hardcoded) list of config files, reads them in, modifies them, and spits them out to the same file. It makes modifications according to templates that may exist in the file. If the file has no template, then location ends up emitting the unchanged file.

In any file you want it to modify, you add an appropriate template. This is the template I have in my /etc/postfix/main.cf:

### START-LOCATION-TEMPLATE
##@ home relayhost = lookout.home
##@ work relayhost = smtp.monash.edu.au
##! /etc/init.d/postfix restart
##! sleep 1
##! /usr/bin/mailq -q
### END-LOCATION-TEMPLATE

When it finds itself inside a template, location stops its normal operation of outputting the lines of the file as-is and instead starts parsing. Interesting lines begin with ##; anything else is ignored. It's the third character that determines how the line is interpreted. So far I have the following functions:

  • #: do nothing, just output the line
  • @: emit if at location. If the location specified on the command line matches the first argument to @, then the rest of the line is added to the file as-is.
  • !: run command. Calls a shell to run the specified command after the file has been generated.
  • >: interpolate line. Include the rest of the line in the file, but expand any %variable%-type markers. So far only %location% is defined, and is replaced with the location specified on the command line.

(I'll provide an example of that last one in a moment).

So in the case of main.cf, let's say we run location with home as the location. This would result in the template section being written to the output file as:

### START-LOCATION-TEMPLATE
##@ home relayhost = lookout.home
relayhost = lookout.home
##@ work relayhost = smtp.monash.edu.au
##! /etc/init.d/postfix restart
##! sleep 1
##! /usr/bin/mailq -q
### END-LOCATION-TEMPLATE

The listed commands are then run, which cause Postfix to be restarted and the mail queue to be flushed:

/etc/init.d/postfix restart
sleep 1
/usr/bin/mailq -q

Naturally Postfix interprets the template parts of the file as comments, so nothing to worry about. The next time location is run, the "bare" relayhost line is ignored, so it doesn't get in the way.

The config for Squid is similar. Because Squid's config file is huge, I don't quite trust my script to handle the whole thing sanely, so at the bottom of squid.conf I've added:

include /etc/squid/location.conf

And in location.conf I have:

### START-LOCATION-TEMPLATE
##@ home cache_peer lookout.home parent 8080 0 default
##@ home never_direct allow all
##! /etc/init.d/squid restart
### END-LOCATION-TEMPLATE

By default Squid will try and hit the internet directly, which is fine for work and unknown locations. For home, I need to force it to always go to an upstream proxy, which is what the cache_peer and never_direct directives achieve.

The proxy at work used to be an authenticating proxy, so I had to specify both a peer and a username/password combination. This made the required amount of variable config a little unwieldy to include in a template, which is where the > function came from. location.conf used to have this:

##> include /etc/squid/upstream.%location%.conf

Which would arrange for upstream.home.conf, upstream.work.conf, etc to be included depending on the location. There's every chance this will come in useful again one day, so I've left the code in there for now.

Here's the script in its entirety:

#!/usr/bin/env perl

use 5.010;

use warnings;
use strict;

my @files = qw(
    /etc/squid/location.conf
    /etc/postfix/main.cf
);

use autodie qw(:default exec);

use FindBin;

# re-exec ourselves under sudo if we're not already root
if ($< != 0) {
    exec "/usr/bin/sudo", "$FindBin::Bin/$FindBin::Script", @ARGV;
}

say "usage: location <where>" and exit 1 if @ARGV != 1;

my ($location) = @ARGV;

for my $file (@files) {
    say "building: $file";

    my @out;
    my @cmd;

    open my $in, "<", $file;

    my $in_template = 0;
    while (my $line = <$in>) {
        chomp $line;

        if ($line =~ m/^### START-LOCATION-TEMPLATE/) {
            $in_template = 1;
            push @out, $line;
            next;
        }

        if ($line =~ m/^### END-LOCATION-TEMPLATE/) {
            $in_template = 0;
            push @out, $line;
            next;
        }

        if (!$in_template) {
            push @out, $line;
            next;
        }

        # inside a template, only ## lines survive; anything else is a
        # previous expansion and is dropped so it can be regenerated
        my ($tag) = $line =~ m/^##([#@!>])/;
        if (!$tag) {
            next;
        }

        given ($tag) {
            when ('#') {
                push @out, $line;
                next;
            }

            when ('@') {
                push @out, $line;

                my ($want, $rest) = $line =~ m/^##@ (\w+) (.*)/;
                if ($want eq $location) {
                    push @out, $rest;
                }

                next;
            }

            when ('!') {
                push @out, $line;

                my ($cmd) = $line =~ m/^##! (.*)/;
                push @cmd, $cmd;

                next;
            }

            when ('>') {
                push @out, $line;

                my ($rest) = $line =~ m/^##> (.*)/;

                $rest =~ s/%location%/$location/g;

                push @out, $rest;
            }
        }
    }

    die "$file: unclosed location template" if $in_template;

    close $in;

    open my $out, ">", $file;
    say $out $_ for @out;
    close $out;

    for my $cmd (@cmd) {
        say "running: $cmd";
        system $cmd;
    }
}

Because it's so trivial and I only run it a couple of times a day, I just run it when I get to work (location work) or when I get home (location home). If I felt inclined I could probably hook it up to my network stuff, but I think that would be more trouble than it's worth.

On occasion I have to use Windows on the same machine. I have no idea how to achieve something similar there, so I just reconfigure my browser. Fortunately I don't go there often, and almost never from work. This is why I like open source. I can make my system work in exactly the way I want, and usually with a minimum of fuss.

thursday, 4 february 2010

posted at 21:23

A couple of weeks ago I placed an order with BatchPCB for the N64 RGB DAC board. Today I received two of them!

I'm quite excited to see the design all professional looking. If I'm really lucky they might even work!

As I understand it sometimes the fabs will make extras of a board in case something goes wrong. If they all come out fine then there's not much to do with the extras, so they just chuck them in as a bonus. That's good; now I have a spare if something goes wrong and I don't feel like I got such a bad deal with the insane postage cost.

eBay is the best source of cheap components in bulk. There's hundreds of stores all selling all manner of things in huge quantities for mere pennies, and half of the time with free shipping. This time around I've picked these up:

All for the bargain basement price of $22.50. Sure, I have to wait a couple of weeks, but I'm not in any hurry here. If I'd been really smart I would've ordered the parts a couple of weeks ago. Oh well :)

friday, 29 january 2010

posted at 09:28

Ok, so what am I working on. Part two of my N64 modernisation project is to arrange it so that USB controllers can be used with the Nintendo 64.

The reasoning here is pretty simple. N64 controllers have a design flaw. I don't fully understand it, but the gist is that there's a magic powder inside the analog sticks that gives them their "springiness" and makes them return to centre. As the controllers wear, the powder escapes and it gets to the point where the sticks won't return to centre anymore as well as losing their sensitivity. Mine have held up pretty well, mostly because I've gone to great pains to take care of them, but they're fifteen years old now and they're starting to show it.

Obviously these controllers aren't manufactured anymore. It's not enough to buy used ones, for obvious reasons, and new ones are difficult to come by. Not impossible - I've seen them on eBay and in shops like Gametraders and Cash Converters, and I guess I wouldn't need to buy many of them, but still, they aren't exactly cheap or plentiful.

A better option is USB gamepads. As you'd expect from any PC peripheral, they're common as dirt and the good ones (eg Logitech or XBox) are comfortable, sturdy and responsive. So my thought has been to arrange it so that these sticks can be used with the N64.

First stop: Google. There's lots of projects where people have interfaced N64 controllers to something else (USB, parallel, Gamecube/Wii) or interfaced similar controllers (Gamecube) to the N64. As far as I've been able to tell though, nobody has ever got a USB stick going on an N64. Great, new territory - seems I can't avoid it.

I've spent the last couple of weeks researching and thinking, and while I haven't yet done any testing with real hardware, I think I have a rough concept for how an interface might work. There are three aspects to it:

  • Acting as a USB host controller and HID class implementation.
  • Translating USB gamepad data into equivalent N64 button/position data.
  • Speaking the proprietary N64 controller protocol.

The translation is fairly straightforward. The gamepads I'm interested in (I'll be using a Logitech Dual Action pad for my testing) have (at least) two sticks, a direction pad, four buttons and two shoulder buttons. There's enough here to map to the N64 layout, which is one stick, a direction pad, two buttons (A and B), a second directional pad (C) and two shoulder buttons. The left shoulder maps to the Z trigger or the real left shoulder, as they were never used together on the N64 due to the structure of the controller. The second stick on the Logitech pad will map to the C buttons, with some threshold to determine if the analog stick is considered "pushed" or not.
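
To make that concrete, here's a toy sketch of the mapping. It's in Perl for readability (the real thing will be AVR code), and the bit positions and field names here are placeholders rather than the actual N64 packet layout:

use warnings;
use strict;

my $C_THRESHOLD = 64;    # how far the right stick must move to count as a C press

# takes a decoded USB gamepad report, returns the 32-bit N64 state:
# 16 button bits, then the stick X and Y as signed bytes
sub n64_state {
    my ($pad) = @_;

    my $buttons = 0;
    $buttons |= 0x8000 if $pad->{a};
    $buttons |= 0x4000 if $pad->{b};
    $buttons |= 0x2000 if $pad->{start};
    $buttons |= 0x1000 if $pad->{left_shoulder};     # doubles as Z and L
    $buttons |= 0x0800 if $pad->{right_shoulder};

    # the right stick past the threshold becomes the C buttons
    $buttons |= 0x0008 if $pad->{rx} >  $C_THRESHOLD;    # C-right
    $buttons |= 0x0004 if $pad->{rx} < -$C_THRESHOLD;    # C-left
    $buttons |= 0x0002 if $pad->{ry} < -$C_THRESHOLD;    # C-up
    $buttons |= 0x0001 if $pad->{ry} >  $C_THRESHOLD;    # C-down

    # pack: 16 button bits big-endian, then two signed bytes for the stick
    return pack "n c c", $buttons, $pad->{lx}, $pad->{ly};
}

# stick centred, A held: prints 80.00.00.00
printf "%v02x\n", n64_state({ a => 1, rx => 0, ry => 0, lx => 0, ly => 0 });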

The USB side is interesting. It's pretty easy to build an AVR-based USB device. V-USB is a very good software stack to turn an AVR into a USB device controller, or you can use one of the numerous chips from FTDI. For a USB host however, the options are far less compelling. As far as I'm able to tell, V-USB does not implement a USB host controller at all. I had intended to use the FTDI Vinculum VDIP1, but as I mentioned previously, the cost of shipping is prohibitive. After some more searching yesterday I found SIAM32, a software USB host controller implemented by a student project team at Cornell. I think some combination of their code and the V-USB code should be enough to implement a minimal host controller and HID class, which is all I need.

On the other side is the N64 controller protocol. Although its proprietary, its long been studied and is pretty well understood. The most useful sources of information have been tzanger's n64dev page and Micah Dowty's Cube64 project.

It's a command-based protocol. The N64 sends a command, and the controller sends the response. There's no provision for the controller to initiate a data send - the N64 regularly polls the controller by sending a "get status" command, to which the controller responds by sending a data packet containing the current state of the buttons and stick. There are also commands to ask the controller if it's there and what peripherals it has attached (such as a rumble or memory pak), as well as for reading from and writing to the memory card. It's pretty simple really, which is good - I like simple.

The connection to the N64 has three lines - +3.3V, ground and data. Obviously the power lines play no part in the communication. All that happens on the data line.

The most difficult thing about the data protocol itself is its strict timing requirements, as is typical of a serial protocol without a separate clock. The line begins in a high (logical 1) state, held there by a pull-up resistor in the controller itself (the line is never explicitly driven high by either end).

A single byte consists of eight data bits and one stop bit. A single bit is 4 microseconds wide. To start the bit, the sender pulls the line low for 1us. The next two microseconds are either high or low, depending on the value of the bit. The final microsecond is high, then it goes again. After all eight bits are sent, the final 4us are all high to signal the end of the byte. See tzanger's page - it has some diagrams that make it easier to follow.
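
Here's a little toy that renders a byte the way it would appear on the wire, one character per microsecond (low as _, high as -), following the description above. The bit order is my assumption; check tzanger's diagrams before trusting it:

use warnings;
use strict;

# render one byte as line levels, one character per microsecond:
# each bit is 1us low, 2us of data, 1us high; the byte ends with 4us high
sub byte_waveform {
    my ($byte) = @_;
    my $wave = "";
    for my $i (reverse 0..7) {    # most significant bit first (an assumption)
        my $bit = ($byte >> $i) & 1;
        $wave .= "_" . ($bit ? "--" : "__") . "-";
    }
    return $wave . "----";
}

print byte_waveform(0x01), "\n";
# ___-___-___-___-___-___-___-_-------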

What this means is that however I implement this I need to be able to sample or transition the data line every microsecond. At 16MHz, that means I need to do something every sixteen cycles. Most instructions on the AVR take a single cycle to execute, so there's plenty of time to do things in between, but because I need to be able to respond to the N64 sending data within 2us, it's pretty much impossible to run the USB host out of the same AVR.

So my interface has two AVRs - one doing the N64 comms, the other managing USB. This complicates things as now some mechanism is required for the two AVRs to communicate with each other.

This is the bit I'm not quite sure about. I originally thought to have the AVR ports tied together such that the USB AVR could just chuck the current state on the port and the N64 AVR could read it whenever it wanted. This is no good though because the entire controller state is 32 bits wide - sixteen for the button state, eight for the analog stick X axis and eight more for the Y axis. I don't really have the bandwidth available to do it that way, not even with the larger AVRs, which would be overkill in every other way anyway.

I've been thinking about perhaps using eight lines and sending the data a byte at a time, but at that point I've now got the two AVRs needing to coordinate communication when they could both be interrupted at any moment, breaking the whole thing. It might work if I allowed the transfer to be interrupted and in that case the N64 AVR will just use the last button state, but then this means that the USB AVR would have to be constantly streaming the current state rather than just sending updates when transitions occur. If it didn't then a state transition could be lost if the transfer is interrupted.

There's always the option of putting four eight-bit latches in between the two AVRs and storing the state there, as they can effectively be thought of as memories with separate read and write channels. This however means slower access (external memory access instructions take two cycles in most cases instead of one) which might present timing problems, as well as requiring more board space.

I need to study the AVR datasheets to figure out if any of the peripherals it comes with can help me out. I'm sure a simple solution will present itself, I just have to find it. Fortunately the need for it is quite a way off. The USB and N64 comms need to be developed first, and they need to be done in isolation to ensure they work correctly.

So that's where I'm at. So far I'm just getting my development environment set up. This week I've built myself an AVR programmer which is working nicely, so next I need to write a few basic programs and make sure my laptop is set up properly and I know what I'm doing. Then the real work can begin :)

thursday, 28 january 2010

posted at 09:05

I was supposed to blog more often, but as usual I missed it. I'm not just going to dump everything though, but rather try to break things up a bit into more logical chunks. So today let's talk about shopping!

I've been looking around for local suppliers of parts and toys. In the past I've always just taken a trip down to my local Jaycar for whatever I needed, but I'm increasingly coming to understand that their range is actually quite limited for what I want and their prices are quite high. So I've naturally turned to the internet for help.

The first stop was RS Electronics. They're one of the big industry suppliers, and have a warehouse in Port Melbourne (which means fast shipping). Somehow they also offer free delivery. I placed an order with them a couple of weeks ago as a tester, though still for stuff I needed: a couple of AVRs, a PIC, and some 20MHz crystals. They had the chips in stock and they arrived by courier the next day. The crystals were only available in their UK warehouse, so they didn't arrive, but neither did they bill me for them straight away. I was a little confused as to whether or not they were still on order or if I had to do something else, so I sent an email. A couple of hours later I got a phone call from a lovely lady who apologised for the confusion and said she'd make sure the order was still good. Obviously it worked, because a week later my parts arrived. So +1 to RS: mid-range prices, fast free shipping, excellent customer support.

Now I had 20MHz crystals for the PIC, but I was still looking for 16MHz crystals for the AVRs. I couldn't initially find anyone that stocked them locally, but eBay had the answer. The ALLEPARTS store operates out of China and has bulk components for a pittance with free shipping (how folks afford this, I don't get). The paltry sum of $8 resulted in a pack of 20 crystals arriving a couple of weeks later. That's an insane price - Jaycar charge $5 PER CRYSTAL! There are other very similar stores on eBay, but I'll probably end up back there because now I know them.

At this point I had enough to get on with the first stages of my next project, which I'll write about soon. While waiting for parts I spent a lot of time trawling for other suppliers, and found all sorts of stuff along the way.

First, the venerable SparkFun. They seem to be near the centre of the hobbyist electronic world, supplying lots of common and uncommon parts and kits, and working hard to make parts that are difficult to get or to work with accessible to mortals (eg by building breakout boards). There's tutorials and forums and all manner of things. I've spent a lot of time here reading things (particularly the Eagle tutorials) and generally lusting after things, and I probably would have laid down a chunk of cash pretty quickly if it wasn't for the insane price of shipping to Australia, a topic I'll rant about soon. Not being able to just buy stuff immediately forced me to continue looking locally for suppliers, but also to really consider what I actually need.

For example: I'm mesmerised by the Arduino, and I'm of course not the only one. My first thought was that it would be the perfect platform for learning AVR stuff, and so I determined that this was what I needed. Upon further study, I started to realise that while it is very very cool, it's not at all what I want. It's a great tool for rapid development, and it's been positioned so that non-technical people can use it too, but from what I can gather, it has way more overhead than I want. I'll get into it more when I talk about my new project next time, but I need some very specific hardware with insanely quick response times. The code is likely going to need to be cycle-counted to work properly.

I could do this with the Arduino by bypassing its firmware and development environment and everything else and just using it as an AVR board, but by the time I do that I've removed all the things that make it special and worth the extra cash. It's not hideously expensive, but if you hadn't already gathered, I am (for a variety of reasons) working on less than a shoestring budget. I can't justify the cost. But I'm still very interested in the platform, and I'm thinking about buying the starter kit for a tech-savvy friend that hasn't done any kind of electronics stuff before.

Let's talk about international shipping. It's insane. There's a world of interesting and reasonably priced parts that become inaccessible because the price of shipping is often more than the price of the part itself! My current hate is the FTDI Vinculum VDIP1. It's a brilliant little module that acts as a USB host controller. This week I found the part itself for just $40, but I can't justify $80 once shipping from the UK is factored in. I haven't yet found a local supplier that prices it reasonably; RS have it for $70, which is slightly better but still more than I want to spend. Honestly, the freight plane pilot could put this thing in his pocket; that's how small it is. How can that kind of cost be justified?

PCB manufacture is another thing that's going to hurt. I've been spending a lot of time in Eagle and it makes preparing PCBs a snap. As I mentioned previously, I planned to try BatchPCB to get my DAC board fabricated. I sent the order in a couple of weeks ago. The board itself came to a quite reasonable US$15. By the time handling and shipping were factored in, it blew out to a cool US$50. I made the order anyway, because I want/need the board and I'm treating it as a trial, but it's only going to be something I can do for complex designs. This seems to just be the nature of the industry though; board manufacture isn't cheap on small scales. I have heard good things about Seeed Studio and their efforts to make this sort of thing more accessible, but I haven't quite figured out how they work yet.

That's the shipping news. Let's get back to buying things.

So local stores! I've managed to get a few recommendations from the local HackerSpace group (what an awesome idea, can't wait to get more involved with this). Little Bird Electronics appear to be a local SparkFun reseller, though I think they have a few other bits. I'm intending to buy a Bus Pirate from them soon, as it looks like an incredibly useful bit of kit to have on the desk.

Via Jon Oxer at Practical Arduino I found ProtoStack. They don't have a huge range, but they do have all the "essentials" for microcontroller hacking. Here's a tip though - make it clear on your website which country you're in. If I'd come across this site on my own, I probably wouldn't have looked at it in any great depth because the prices are in US dollars, the site is a .com, etc - I would have gone "crap range, US = crazy shipping, ignore". It was only because Jon's video mentioned that they're in Australia that I took the time to look at them in depth. Shipping is only $5, so on the occasion that I need something they have, I will be buying from them.

Anyway, that's about all I have about shopping for now. I have a bunch of other local sites bookmarked, but I haven't done anything more than a cursory search on most of them. I'm starting to put together a list of affordable places to buy things, though there's still a couple of bits I need to find. Fortunately I now have enough parts to do a good portion of my prototyping. That's a story for next time!

sunday, 17 january 2010

posted at 22:20

Here it is, first post of the new year on the last day of my almost-four-week break from work. The time off has been awesome because I've gotten so much done. Apart from various parties and outings and other festivities, most of the first two weeks were spent organising the garage in a pretty serious way. I took almost all of my old computer gear to the local tip (who are participating in the Byteback program, making my culling reasonably environmentally-friendly). This is a pretty big thing for me, as I'm a hoarder and had kept all sorts of stuff (mostly for sentimental reasons) dating back to 1982. I've kept one working model of every computer and console I had for posterity and/or hacking, but have thrown out all the extras and all the PC stuff I had that I'll likely never use. I've kept a few things that might have some actual monetary or other value as collectables, and I'll put those up on eBay when I get around to it.

As a result of all this, my garage has just about nothing in it, so I've set up a desk and sorted all my various electronics bits, tools and whatever so they're all nicely labeled and accessible. I even put in some halogen lights so the whole place is extremely well lit and I can see what I'm doing. And no computer in sight, though I am dragging the laptop out there with me.

All of this is to support this week's new hobby, which is getting back into hardware hacking in a pretty serious way. I'm not sure if it's as a result of general burnout or because I now write code for my job rather than hacking to support my job as I was previously, but I found towards the end of last year that I just had no brain for code by the time I got home. I'm thinking that perhaps hardware is close enough to what I know to hold my interest and not be completely impossible, but different enough that there's room in my brain for it. Time shall tell.

Anyway, back in May I bought an RGB-to-HDMI converter and did some work to get my Amiga going on my LCD TV. As I mentioned then, my next project was to get RGB out of my Nintendo 64 so that I could play it without it looking horrible. I began work on what seemed like a fairly modest project: to build Tim Worthington's N64 RGB DAC using discrete logic rather than a CPLD (which at the time seemed way too complicated).

At the time I didn't really want to commit any money to this project as I didn't know if it was something that I was actually capable of doing. Since I had some stripboard, connectors and most of the other parts I'd need I opted to just build the thing on stripboard and buy the few chips and resistors that I'd need.

In hindsight this turned out to be the wrong decision. Routing data buses on stripboard means a lot of bits of wire flying around, and it doesn't help that the board has to be small to fit inside the case. Over the course of the next couple of months I got perhaps three-quarters of the way there, and after a big effort in the last two weeks I produced this monstrosity:

Yeah, I know. There's more pics on Flickr but it doesn't get any prettier.

There's not much to it. It's four data latches (one each for red, green and blue and one for the control lines), a couple of support chips and three R2R ladders.

In spite of the mess I still had high hopes for it, so I hooked it up and to my great surprise it (sorta) worked. Here's what Super Mario 64 looks like on my TV with the standard composite cable:

The major thing I'm trying to fix here is the weird "hatching" effect on high-contrast edges (like the life/star counters). It's not bad in static screens, but once things start moving it's a horror on the eyes; it's pretty much impossible to play.

But, with the magic of wires, we get this:

As you can see, everything is nice and crispy, which was the desired result. Some of the colours are off though, which obviously isn't right.

Another example, this time from Yoshi's Story:

Composite:

RGB:

I haven't had the chance to really think about it in depth, but with the way the colours are generally brighter and Mario's clothes are washed out, and the way the other colours appear, my gut feeling is that I've wired the inputs to the R2R ladders wrong in such a way that they're off by one bit. With the board being so insane though, I figure I have pretty much no chance of debugging it, and even if I do figure it out it's going to kill my fingers to try and make any changes to the board.

Actually getting the damn thing to work though has given me a lot of confidence and so I've decided to build it again, but this time done right, which means a real PCB. So over the last week I've been teaching myself how to use Eagle, a circuit and PCB designer that seems to be pretty popular. The learning curve is pretty steep, but I've made some good progress with it.

The first thing you do is draw the circuit. I've pretty much just copied Tim's design, getting this:

Next comes the board layout. It's pretty straightforward: set up the board size, place the components, then add all the wire routes to the board. The last bit is made simple using Eagle's autorouter. Various forums and whatnot suggest that real board designers don't use the autorouter, but I don't care - it seems like it will work well enough, and I'm just a noob here so I'll take all the help I can get.

I also found a wonderful little program called Eagle3D which produces 3D renders of Eagle boards, including components. I ran mine through it to see what it would look like and got this:

Top side:

Bottom side:

I'm feeling pretty good about this! I'll sit on this for a couple of days just to make sure I've got it right, then I'll send it off to BatchPCB, a PCB fabrication service that will do short runs (even one) for reasonable prices.

I've no doubt that I've missed something, and it won't work properly the first time, but at least this board can be debugged. I see some good looking games in my future :)

thursday, 10 december 2009

posted at 21:20
tags:

Day two of training today. I had to run off early, but not before getting a crash course in Moose. I've been watching Moose for a couple of years, and have a project in mind for it, but haven't got around to doing anything with it yet. Doing a few basic exercises with it was awesome just to see what it can do, but I did manage to get frustrated by three things within the first ten minutes.

  1. The first thing I noticed is that every attribute accessor created is publicly available. There isn't really a way to make an attribute that is read/write from inside the class but read-only from outside. The fine manual suggests this:

    has 'attr' => (
        is     => 'ro',
        writer => '_set_attr',
    );
    

    This works ok, but it's still possible for code outside the class to call _set_attr directly. Until we get lexical subs it's impossible to make the writer method invisible to the outside world, but in the meantime I'd still like it to be possible for Moose to produce an accessor that can check its caller.

    In a similar vein, it's not possible to create proper protected or private attributes. Private attributes can sort of be done by assigning directly to the object hash:

    $self->{attr} = 1;
    

    I don't like this because it makes assumptions about the internal implementation of the objects (and with Moose I'd like to remain as ignorant as possible on this point), but also because it provides no type or constraint checking.

    Protected attributes (that is, attributes private to a class and its subclasses) seem to be completely impossible.

  2. By default, a Moose class quietly accepts any and all parameters passed to its constructor, regardless of whether or not they correspond to an attribute in the class or its parents. This confused me for a moment as I've come from Params::Validate, which allows you to declare parameter types and constraints much like Moose attribute declarations, but dies if you provide a parameter that is not declared. The fine inhabitants of #moose on irc.perl.org pointed me at MooseX::StrictConstructor, which does what I want - dies if undefined parameters are provided.

    It gets better though. I was declaring an attribute that I wanted to be impossible to initialise via the constructor as I planned to set its initial value in BUILD, and to allow the user to provide a value only to ignore it is confusing. The manual explains that specifying init_arg => undef in the attribute definition will arrange for that parameter passed to the constructor to be ignored, but again, it does it quietly.

    It turns out (again via #moose) that combining MooseX::StrictConstructor with init_arg => undef yields the desired results (see the sketch after this list). I can live with that, but I would never have anticipated that result from the documentation. Hmph.

  3. Moose doesn't provide any syntactic sugar for class attributes/methods. A quick search just now turns up MooseX::ClassAttribute which will probably be as much as I'll need, at least initially, but I was surprised that core Moose didn't have anything for this. Are class attributes so uncommon?
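
For the record, here's roughly what the StrictConstructor/init_arg combination from point 2 looks like. This is just a minimal sketch; the Counter class and its count attribute are invented for illustration:

package Counter;
use Moose;
use MooseX::StrictConstructor;

# count can't be set from the constructor; BUILD gives it its initial value
has 'count' => (
    is       => 'rw',
    isa      => 'Int',
    init_arg => undef,
);

sub BUILD {
    my ($self) = @_;
    $self->count(0);
}

package main;

my $counter = Counter->new;               # fine; count starts at 0
my $broken  = Counter->new(count => 5);   # dies - count is not a recognised parameter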

At the end of the day though, these are all pretty minor nits. Moose is awesome. It's very actively maintained and developed by a number of incredibly smart people, so it's not going away any time soon. I'm looking forward to having the time to do something serious with it.

thursday, 10 december 2009

posted at 11:26
tags:

Work is sending me on a Perl Training Australia course this week, so I'm getting to hang out with Paul and Jacinta and get a good refresher on Perl OO. I wouldn't say I needed it, but I've been enjoying the discussion and it never hurts to make sure that your accumulated understanding matches the current reality.

One of the exercises involved a class representing a coin with methods to flip the coin. One of the things we were asked to do at one point was to create an array of coins and do various things to them. My first instinct to create the array was to do this:

my @coins = (Coin->new) x 10;

I was saddened but not surprised to find that this doesn't work. As the following test demonstrates, the left hand side is only evaluated once and then just copied, so I ended up with an array containing ten references to the same object:

$ perl -E '$c = 0; @x = ($c++) x 10; say @x'
0000000000

The best I could come up with is this, which I don't think reads anywhere near as well:

my @coins = map { Coin->new } (1..10);

We briefly discussed whether it would be worth developing a core patch to do something like it, but realistically the only option that preserves a reasonable amount of backward compatibility is to only reevaluate the left side for a very specific set of types, namely code references, giving something like this:

my @coins = (sub { Coin->new }) x 10;

Given that that really doesn't read particularly better than the version using map, and not knowing if anything smarter is possible (or how to do it if it is), and knowing that the core developers aren't particularly keen on new features to existing constructs at the best of times, I've opted to leave it for now but keep my eyes open for things like this.
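
To be clear, under today's semantics that last version just replicates the same code reference ten times; you'd still have to call each copy to get distinct coins, something like this:

my @makers = (sub { Coin->new }) x 10;
my @coins  = map { $_->() } @makers;

which is really just the map version with extra steps. The hypothetical core change would have x do the calling for you.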

One part of another exercise had me dealing with decks of cards. Internally I represented suits as integers, with the following list to assist with the as_string method:

my @suits = qw(hearts spades clubs diamonds);

When I got to adding the initialiser for the class, naturally I wanted to be able to specify a string. The usual thing I'd do here is create a hash from @suits with the values as the integer array indexes. This time I came up with this one-liner to determine the index of a value in an array:

my $index = do { my $found; grep { $found = 1 if $_ eq $needle; $found ? 0 : 1 } @haystack };

It plays on the fact that grep in scalar context returns the number of matches; that is, the number of times the code block evaluates true. All this does is arrange things such that the block is true for every element before the wanted value but false for every element after (and including) it, so the count of true evaluations is exactly the index of the wanted value. If $index == @haystack, then it wasn't found.

It's certainly not optimal - a binary search would be quicker (if the array were sorted), and you'd nearly always want to use the hash method if you were doing it many times, but it was certainly fun to write a cute one-liner to do it.
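
For reference, the hash method I mentioned is just a one-time inversion of the array - something like this quick sketch, using the same @suits list:

my @suits = qw(hearts spades clubs diamonds);

# map each name to its position in the array, once up front
my %suit_index = map { $suits[$_] => $_ } 0 .. $#suits;

my $index = $suit_index{clubs};   # 2, and every lookup from here is constant time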

monday, 2 november 2009

posted at 08:17
tags:

A couple of months ago I started work in a new job, doing various programmery things for the Monash student intranet thing. The job description says "web programmer", but there hasn't been much web yet. At this point I'm mostly concentrating on the glue code needed to hook up various Google services to our environment. 99% of the code I'm writing and working on is Perl.

I've been using Perl for ages though. For the last nine-and-a-bit years I've been a mail sysadmin at Monash, and while the "core" of our systems has always been full-on professional mail software packages (both proprietary and open-source), all the bits in between have always been Perl. We've written all sorts of stuff, from full web environments and workflow packages to all the traditional sysadmin tools like log parsers, report generators, config builders and everything in between. There's never been any question for us - Perl is just so obvious for this kind of work.

Previously though, Perl was merely a tool that I used to get the job done. In many ways it's now become the job itself. On any given day I can be reasonably confident that most of it will be reading or writing Perl code, whereas before I'd only bust it out when I needed it. Additionally, I haven't really done this type of work before, so I'm getting lots of ideas for stuff I want to play with on my own time and also finding gaps in my knowledge that I want to fill out. So now I'm finding just about every moment I'm at the computer I'm doing something with Perl, far more than ever before.

So, I've decided that it would be really good to finally join the mob rather than just hang around the edges looking in. I'm signing up for the Iron Man challenge to keep me honest, and I'm moving my code to GitHub for a bit more visibility. This is going to be an interesting change for me, as in both blogging and coding I'm used to producing something large and fully-formed before showing the world, but obviously that doesn't work if you need to post once a week. I've started making a list of little Perl things I can write about in a couple of paragraphs, so hopefully I'll be able to keep it fresh.

Additionally, I've committed myself to writing everything I possibly can in Perl. I have a long history with C as well, and for the longest time always reached for it for anything closer to the hardware/OS (a fuzzy line, but typically that means server-type things). No more. From now on unless there's a very specific reason why Perl is unsuitable, I'll be choosing Perl for my code.

So that's it. Hi :)

thursday, 22 october 2009

posted at 13:33

As is often the case with me, what started as a small hack to blosxom to make it do tags and per-tag feeds turned into me rewriting it from the bottom up. I quite like what it's become, though I doubt it's of much use to anybody but myself. Give me a yell if you want the code.

Anyway, now the whole site has proper tags, and you can use them to subscribe to just bits of my ramblings rather than the whole lot, which should make a huge difference considering how much I don't actually write. Oh, and there are Atom feeds too, if you like that sort of thing.

To celebrate this momentous occasion, I've moved the whole mess to a new and somewhat relevant domain, eatenbyagrue.org. I think I got all the redirects right, so existing subscriptions should work ok.

That's all. Back to work shortly.

tuesday, 29 september 2009

posted at 21:16

I love blosxom, so I persist with it, but it's a nightmare to refactor. It could have done what it does just as well without being quite as clever. I wonder if that's why development on it is mostly dead. In any case, the refactoring goes well, and I hope soon to have something that does exactly what I want (which I'll talk about more soon; better to spend my time on code right now).

thursday, 24 september 2009

posted at 22:59

Tonight I wrote a simple tag plugin for blosxom (the blog engine I use), imported all the categories from the old Wordpress blog as tags, and wrote some CSS to make it work properly. There's still a bit more to do, notably making the tags into links that take you to other posts tagged the same thing, as well as getting per-tag feeds going (though the RSS plugin needs a lot of work anyway).

The plugin, for the curious:

package tags;

use vars qw($tags);

# every blosxom plugin needs a start sub; returning 1 enables the plugin
sub start { 1 }

# called for each story; $tags ends up available to flavour templates
# as $tags::tags
sub story {
    my ($pkg, $currentdir, $head_ref) = @_;

    $tags = '';
    if ($meta::tags) {
        # the meta plugin exposes the post's meta-tags header as $meta::tags
        my @tags = split /\s+/, $meta::tags;
        $tags = "<div class='tags'>tags: <ul>";
        $tags .= "<li>$_</li>" for @tags;
        $tags .= "</ul></div>";
    }
}

1;

Obviously, it's really the meta plugin that does most of the heavy lifting.

So now at the top of a post I write something like:

meta-tags: site perl

and tags pop out. Lovely!
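
For what it's worth, making the tags into links later should only need one line of the plugin changed - something like this, assuming a /tags/<name> URL scheme (which doesn't exist yet):

$tags .= "<li><a href='/tags/$_'>$_</a></li>" for @tags;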

friday, 18 september 2009

posted at 22:25
tags:

Ooh, it has been a while, and a lot has happened. If I wrote properly about everything, I'd be here all day, so here's my usual long list of stuff I've been up to. Much of it is interrelated in strange ways, so this is not necessarily chronological. You'll get over it.

  • pyro, my trusty laptop, suffered a massive hard drive crash about a month ago after a bungled gparted session. I have backups of all the really critical stuff, so it wasn't a complete tragedy, but there's plenty of stuff that I may have lost. I say "may", because via the magic of e2extract I seem to have determined that while the directory structure is completely destroyed, most of the data is intact. I currently have a 120GB filesystem image sitting on an external hard drive waiting for me to get around to sifting through it for data.
  • as a result, I haven't really touched any AROS code since then. Unless I can save my git pack files, I have actually lost quite a few of my private development branches (anything not saved to repo.or.cz). The hosted rewrite is still there at least, should I ever decide to get back to it.
  • so, laptop rebuild. It's actually much better put together now, I think. The Linux side has fewer warts than it did before, and most of the hardware seems to be working just fine. On the Windows side I installed Windows 7, and I'm actually quite impressed with it (and I've never been impressed with Windows in the past). It seems to perform much better than XP ever did (I never tried Vista).
  • I bought my wife a new laptop and installed 7 there too, and she's very happy with it as well.
  • not one to make the same mistake twice, the very next day after the crash I went out and bought a 1TB external drive, bolted it to our home server and set up automatic backup regimes for both our laptops. This is particularly good for the wife, who has several gigabytes of photos from the last five years of our life (since the kids have been around).
  • without much motivation to salvage my code, and with work proving rather taxing on my brain, I started spending my evenings loading up Windows and playing Left 4 Dead. A lot. They recently released support for third-party maps and stuff, so my brothers and I have been trying some really great new campaigns and having a lot of fun with it.
  • then the really interesting bit happened. I was approached by another area within my division and was asked if I was interested in coming across to work on an extremely interesting new project. I can't go into details yet (the whole thing is still tied up in legals), but essentially I'm getting paid the same to cut code and design systems, with no management responsibility of any kind. This is something close to my dream job and something I never thought I'd be able to get at Monash, so after a token amount of consideration I left my job of nine years and took the plunge. Today is my fifth day and so far I'm loving it - the work is interesting, the environment is comfortable and the mood among my coworkers is very happy and light, which makes a huge difference from the doom and gloom that permeated my former life. I do miss my team though - I've worked with some of them so long that they've become like family to me. At least they're only a short walk away, so I can still get a coffee with them from time to time.
  • for various reasons I'm not allowed to keep my laptop, which belongs to my old department (though they're kindly letting me continue to use it for a little while), so my nice new job has bought me a nice new laptop. Christened junai, it arrived yesterday and I should be able to bring it home and switch to using it full-time today. Cloning a Debian system is trivial, by the way.
  • the new job is all Perl, which I unfortunately haven't had much time for lately, so as well as enjoying the day I'm finding I'm getting home and wanting to work on that too. Since it fits in with work, doing it on my own time is actually just more of the same rather than being yet another thing I have to try and fit into my life. As such, I'm thinking I might spend some of my spare time getting a pile of unreleased code into shape and onto the CPAN, and maybe, just maybe, getting along to the occasional Melbourne.pm meeting.

I think that's everything. The other thing I'm going to try and do is blog a little more often, probably by lowering my standards a little and not writing an epic tome each time :)

monday, 27 july 2009

posted at 19:25
tags:

The last couple of months have been busy but I've managed to find bits of time here and there to hack on the new AROS hosted port. Last week I really got the guts of the task switching and interrupt code working the way I wanted, which is what I'm here to tell you about today.

Task switching in a typical multitasking system is very simple in concept. Imagine a computer running a single task. There's a big pile of instructions in memory somewhere, and the processor just runs them in sequence. It will keep doing that until something stops it. That something is the most important requirement to make preemptive multitasking work.

What usually happens (again in very simple terms) is that there's an extra bit of circuitry somewhere in the computer that works as a timer. Every now and again (typically tens or hundreds of times a second), it will prod the CPU. In response, the CPU will stop what it's doing and go and run a different bit of code somewhere else in memory. The "prod" is known as an interrupt (or Interrupt Request (IRQ)), and the bit of code that runs is the interrupt handler (or more formally, the Interrupt Service Routine (ISR)). It's the handler's job to arrange for a different task to run.

Something the CPU will do when responding to the interrupt is to save its complete state (known as the context) before it calls the handler. That is, somewhere in memory (typically on the stack) it will save a copy of all its registers, the stack pointer, the program counter and everything else it needs to continue running the program from where it was stopped. This is necessary as the handler will need to use those registers in order to do its work. Many CPUs provide a single instruction to restore the entire CPU state in one go.

To make task switching work, the interrupt handler will take a copy of the context and store it inside the OS task state, which usually contains lots of other info about the running task, such as memory it has allocated, files it has open, etc. Then, the handler chooses another task based on some criteria (this is the scheduler). Finally, it copies the saved context from the state of the new task to wherever the CPU needs it, then tells the CPU to reload the context and leave the handler. The handler "returns" to running the newly selected task. This process continues ad infinitum and you get the illusion that your computer is doing lots of things at the same time.

The existing Unix-hosted version of AROS does fundamentally the same thing, but in a highly convoluted way. The main thing to note is that all tasks run inside a single Unix process, which then does some deep magic with Unix signals to make interrupts and task switches happen. The kind of magic employed is highly OS-specific, and although I don't know exactly why it was done the way it was, I can guess that it was one of:

  • The facilities for user-space task switching weren't available or were incomplete when it was first written (I know this was the case for Linux)
  • Originally AROS was much more tightly integrated with the Linux desktop (eg one AROS window per X11 window, etc)

Times have changed though, and so what I'm trying to do is make a new port that is designed to be much closer structurally to its native cousins. I'm realising this through a number of mechanisms provided by POSIX: threads, signals and the ucontext set of functions (though somewhat ironically these have been removed from the latest versions of POSIX and SUS).

What I do is this. I create a thread to mimic the function of the timer interrupt delivery circuit. It sits in a tight loop, waiting a little while then sending a signal to the "main" thread. This obviously mimics the interrupt that would exist on a real system, and causes the main thread to stop what it's doing and jump to a signal handler.

When a signal is delivered to a Unix process, the kernel saves the current process state (context) onto the stack and then calls a signal handler function. When the handler returns, the kernel reloads the state from the stack and continues from where it was. This sounds like almost exactly what we want, except Unix typically doesn't provide a portable way to get at the saved state on the stack. The existing hosted AROS implementation for Linux uses a bunch of Linux-specific knowledge to dig into the stack and get the data it needs, but that's obviously not portable. These days however, we have the ucontext functions which, while not without their quirks, are far more useful.

The prototypes look like this:

  • int getcontext(ucontext_t *ucp);
  • int setcontext(const ucontext_t *ucp);
  • void makecontext(ucontext_t *ucp, void (*func)(), int argc, ...);
  • int swapcontext(ucontext_t *oucp, ucontext_t *ucp);

For those who've seen setjmp() and longjmp() before, getcontext() and setcontext() will be quite familiar in function. getcontext() takes a copy of the current process state, including the CPU context, and drops it into the memory pointed to by ucp. setcontext() restores the process state and CPU context from whatever is saved in ucp, effectively causing a direct jump to the point just after the getcontext(). What this means is that you get the appearance of setcontext() never returning, whereas getcontext() can return multiple times. Interesting times indeed.

makecontext() takes an existing context and modifies it such that when setcontext() is called on it, it will call func with the arguments specified. You actually need to do a bit of fiddling inside ucp before calling it, to set up an alternate stack for the context to run on and so forth. For the most part this call is not particularly useful except when setting up.

Finally, swapcontext() is an atomic context get-and-set. That is, it does this:

getcontext(oucp);
setcontext(ucp);

except that a later setcontext(oucp) will return to the point after the call to swapcontext().

Armed with this knowledge, we can now take a look at the (slightly simplified) implementation. The task switch "interrupt" handler is a two-stage process. The first part, which as far as the Unix kernel is concerned is the actual signal handler, looks like this:

ucontext_t irq_ctx;
char *irq_stack;

void irq_trampoline (int signo, siginfo_t *si, void *vctx) {
    /* build a context that will run the real interrupt handler on its
     * own stack */
    getcontext(&irq_ctx);
    irq_ctx.uc_stack.ss_sp = (void *) irq_stack;
    irq_ctx.uc_stack.ss_size = SIGSTKSZ;
    irq_ctx.uc_stack.ss_flags = 0;
    makecontext(&irq_ctx, (void (*)()) irq_handler, 0);

    /* save the interrupted task's context and jump to the handler; when
     * the saved context is later resumed, execution returns here and
     * falls off the end of the signal handler */
    swapcontext((ucontext_t *) GetIntETask(SysBase->ThisTask)->iet_Context, &irq_ctx);
}

(irq_stack is initialised during startup as irq_stack = malloc(SIGSTKSZ))

So the signal from the timer thread arrives, and the current task gets interrupted and we arrive here. The getcontext() and makecontext() bit sets up a new context that, when called, will call the actual interrupt handler (ie the scheduler etc) and select a new task.

It's the call to swapcontext() that is most interesting. What this does is save the current context into the current task structure, and switch to the interrupt handler proper. The handler calls into the scheduler to choose another task, then calls setcontext() on its saved context to start it up. The subtlety is in the fact that when the saved context is later used to start the task up again, it will return to the point just after the call to swapcontext(), immediately drop off the end of the signal handler and head back to where it was.

You might wonder why the more obvious method of using getcontext() to save the context then calling the scheduler directly isn't used. The problem comes from the fact that when getcontext() "returns", the caller has no way of knowing if it was the initial call to save the context, or if it was a result of setcontext() being called. Without this knowledge, we're left with this kind of trickery, so that the only time we end up past the point where the context was saved is when the context is reloaded.

(This is the opposite of setjmp(), which returns zero from its initial call and non-zero after a call to longjmp(). It perhaps makes the code easier to read to just have a call and test to determine what to do next, but it's slightly slower, and it would also result in the handler being run on the task stack, which means making the handler more complicated to make sure it rewinds correctly when the task is switched back. Or tricks can be played with sigaltstack(), which further complicates things.)

The actual implementation is naturally a little more complicated, mostly because it has to deal with so-called "system calls", which is what happens when an application triggers a task switch (eg by calling Wait()). To allow that, each interrupt signal carries a numeric id that allows the trampoline and handler to determine what type of interrupt was requested. Then, when Exec wants to force a task switch, it will trigger the interrupt requesting it, which will run the scheduler with the main task "stopped", as above, but with slightly different semantics. It doesn't add much code though, and the technique is identical.

There's still lots to be done to clean up the scheduler, which so far is a hack job of the hack job already present in the mingw32 port. The next thing to do is continue to work on the boot sequence, which is almost there but is just a tiny bit finicky at the moment (that's a technical term). Next time I think I'll write about the new host module setup which blows hostlib.resource out of the water (if you know what that is)!

monday, 29 june 2009

posted at 22:09
tags:

My current bus activity is AROS hacking. I've actually been doing at least an hour a day for the last couple of months, so I'm making plenty of progress, but I'm off on a long and exciting tangent so it all seems quite different to what I was doing before.

I started thinking about what it would take to make cairo a "first-class" graphics system, sitting directly on top of the graphics drivers, bypassing graphics.library completely. This isn't a crazy idea - a major part of graphics.library is providing drawing and font rendering primitives, similar conceptually to what cairo does (though cairo is of course far more advanced). My thought is that with a graphics system like that at the bottom of the stack, apps can do all sorts of crazy compositing and whatever other eyecandy effects, and the whole desktop benefits. Initially it could operate alongside graphics.library, but it'd also probably be reasonable to implement graphics.library functions on top of cairo at some later time.

From there I started looking at the graphics driver API. What we have works well enough (despite the deficiencies that I've complained about in the past), but it's not a particularly good fit for the cairo backend API, and from what I understand, not a great match for a modern 2D hardware interface either. So the next thing I started thinking about was changing the graphics drivers to have the exact same interface as the cairo backend API. From there, a driver and/or the hardware could directly accelerate cairo drawing operations. The cairo software fallbacks are pretty heavily tested and optimised (including some tight assembly versions of things where necessary), so I'd expect that even a graphics card or whatever that doesn't offer a lot of function could still go faster than, say, the current SDL driver (which uses the graphics.hidd fallbacks for just about everything currently).

So now I'm looking at drivers. As you know, I work in hosted, so my two examples are the X11 and SDL drivers. Something I hate about the X11 driver is how closely tied it is to the underlying kernel implementation. I took some steps to deal with this when I wrote the SDL driver with hostlib.resource, but it's not perfect, and lately something has changed in the X11 driver to require it to be linked with the kernel once again. Besides that, the X11 driver is ancient, hailing from a time when AROS windows were X11 windows, and it retains a lot of that structure even though it's no longer the way the world works. Also, it relies on the X11 "backing store" feature, which is usually disabled and will shortly be removed from Xorg. In short, the thing needs a rewrite.

So yay, rewriting one, maybe two, graphics drivers. Down a level to figure out what's going on the core, and sure enough, more work required there. In the last few years the structure of an AROS kernel has changed to be a minimal kernel.resource which implements the absolute minimum required to initialise the memory and task-switching hardware and hand control to exec.library. The loader (typically GRUB) can optionally get whatever modules (libraries, resources, devices, etc) into memory and make them available to exec when it starts. This is the basic idea behind the so-called "modular kernel", which has been implemented in the x86_64, Efika, SAM (both PPC), and more recently, mingw32 ports. The only ports that don't do this are the first two - Linux hosted and i386-pc.

The mingw32 port is particularly interesting. It's a hosted port to Windows, and in essence uses the OS threading system to implement a minimal virtual machine, all within kernel.resource. It has a small bootloader that loads an ELF kernel, making it so that stock AROS i386 code can be used even on Windows, which doesn't use ELF itself. The other thing it does is neatly split modules into host-side and AROS-side parts. The AROS parts are handled as normal modules, but in their initialisation they call into hostlib.resource (which is now contained within kernel.resource) to load and link the host-side part. These are standard shared libraries (ie DLLs) which can bring in any library dependencies they need, neatly avoiding the problem in the X11 and SDL drivers where it's kinda painful to find the needed libraries at runtime. This way, you just find what you need at link time.

And so, after all this, I'm doing a new port of AROS to Linux, based on the structure used for the mingw32 port. I'm improving on it a bit though. There's still too much arch-specific code in exec.library (like thread context manipulation) which I'm hiding inside kernel.resource. I'm also adding a host.resource which will provide ways for modules to hook into the system main loop inside kernel.resource to do things like "virtual" hardware and the like (ie faking interrupts and such). The mingw32 port did this via special architecture-specific calls in kernel.resource, but I want to try to make kernel.resource have a standard interface across all ports, so they can all run an exec.library that is substantially the same.

So that's some kind of plan. I'm currently at the point where the kernel.resource boots and gets exec.library online. The next thing I need to do is reimplement my task switching and interrupt core which I never tested. If you feel like googling something, it turns out that ucontext_t is not particularly easy to copy or cache on Linux due to the Linux people messing up the way they store the floating point state. I need to rewrite it based on the wonderful context_demo.c example, which never requires an explicit context copy and should do much better. After that I should be able to hook DOS up and get something interesting happening.

I'll keep working and maybe let you know some more in another month or two :)

sunday, 28 june 2009

posted at 20:42
tags:

Yeah, it's been a while. I'm still here, and I've done heaps of stuff since last time, but I just haven't gotten around to writing about it yet. I'll get there.

What I'm here for tonight is to tell you about something new. I know there are people out there blogging about AROS. I'm subscribed to a few of them myself. I'm sure I haven't got all of them though. So I'm putting together a planet to list them all.

If you're trying to follow what's going on with AROS, it'll be good for you to subscribe to this planet, as you'll find out everything that's going on. If you're blogging about AROS, it'll be good for you to be on this planet, as it makes sure that everyone is reading your stuff and you benefit from other people's popularity.

If you write about AROS, email me (rob@cataclysm.cx) or ping me on IRC (fce2 on irc.freenode.org). Let me know the location of your RSS or Atom feed, and I'll add you. It's cool if you have non-AROS stuff in there; this is about AROS people as well as AROS itself.

If this gets big and popular, I'll see what I can do to get a better URL. How does planet.aros.org sound? :)

Oh, and I need to do something to pretty it up a bit. If you feel like doing something there, drop me a line.

saturday, 9 may 2009

posted at 08:16

Gub and Penny went out with a friend last night, so with the girls in bed I got a solid five hour block all to myself. At the end of the night I'd achieved something I'm quite proud of:

This is my ancient Amiga 500 hooked up to my nice tv via the RGB-to-HDMI converter I got for my birthday.

I built two cables. The first is the main one that I'll need for all the things I intend to do with this converter. It turns the somewhat unwieldy SCART input on the converter into a rather more convenient DE9 connector. I've done my own version of the Game Station X connector - I've left out the Luma and Chroma lines but added a ground line. So in short, this cable is carrying the red, green and blue video lines, left and right audio, ground and a +5V line - everything I need.

There's not much to it. The only tricky bit is that SCART has a line (BLNK on pin 16) that selects whether the input is composite or RGB. The line needs to be fed 1-3V to select RGB; leaving it unconnected gives composite. Connecting it to the +5V line via a 180-ohm resistor makes a nice voltage divider with the input's own termination (if I have it right, pin 16 is terminated at 75 ohms, so 5V across 180+75 ohms puts about 1.5V at the pin - comfortably in range) and provides the needed signal.

The second cable is one that takes the video off the Amiga and turns it into my custom format. I originally intended to modify the cable I built years ago to connect the Amiga to my 1084S monitor (which died long ago), but on opening it I found that the cable was a five-core cable and wasn't carrying the +5V line. A quick dig around in the garage revealed a short length of six-core phone cable, so I used this to build a new cable. I also had to cannibalise a 2xRCA-2xRCA cable to provide the audio (which doesn't come through the video port on the Amiga but instead via two RCA ports), so I now have this rather peculiar looking plug with two distinct cables coming out of it.

The most surprising thing about all this is that it worked first time. I dabble, but I am most certainly not an electronics guy. There were a few tricky bits where I just took the option that seemed most obvious, but I really expected it not to work because I hadn't understood some obscure detail. I'm excited that I'm able to do this! I keep old game systems around because I like to play the games from time to time, but most people, when hooking their old machines up to a new tv and seeing it looking crap, would be powerless to do anything about it. I like that I know enough to be able to buy or build things that can make it work!

It does seem that my Amiga has suffered from its long storage. Half the time it doesn't start at all, and sometimes it crashes at boot:

But damn, that text is crisp!

Next is the Nintendo 64 mod to get the RGB lines out. I'm hoping to find some time for it this weekend.

monday, 4 may 2009

posted at 11:39

I'm pretty sick right now and was just getting ready to go to bed when someone knocked on the door and handed me a package:

Its my birthday present!

Inside, a plain unbranded box. China's very finest:

Inside that, a few things: a single page "manual", the converter unit itself, a custom breakout cable for component & audio input (eg from a DVD player) and the power supply with an awesome giant UK plug, egads.

Front and back sides of the converter unit:

This afternoon I'll go around to Dick Smith and grab a power converter so I can plug it in and then give it a test with the DVD player. If I get some time tonight and it's not too cold, I'll camp out in the garage and start trying to build some cables.

sunday, 3 may 2009

posted at 22:49

Long ago I wrote an SDL driver for AROS hosted. Back then I wrote about it being slow because of deficiencies in the driver interface that require flushing the output to the screen for every pixel plotted by software fallbacks. Go and read that first.

I never did finish my implementation originally, but in the last week I've resurrected the branch and completed it. It's taken adding an UpdateRect method to the bitmap class and then modifying graphics.library to call it after every operation. If it's running a tight loop to paint a rectangle or something, it will call this once when it's finished to push its output.

To test, I removed all the "accelerated" methods in the SDL bitmap class, leaving only GetPixel and PutPixel. Back when I first wrote sdl.hidd this was all I had implemented, and it worked fine, but was slow enough that you could watch the individual pixels appear on the screen. With the UpdateRect stuff it's now very usable. It's not blindingly fast, but it's snappy enough to be comfortable.

And the best thing is that no changes are required to existing graphics drivers. For those, the call to UpdateRect will just use the baseclass version, which is a no-op. I've confirmed this is actually the case with the X11 driver, so yay.

I'm not sure what's next for my hacking. I'm really just studying graphics.library and graphics.hidd at the moment, trying to get my head around how it all fits together. Something big is coming, I'm just not sure what it looks like yet :)

tuesday, 28 april 2009

posted at 08:39
tags:

It would appear I'm back in the AROS game for a little while. I got a nice email asking for some help with fat.handler, so I decided that I'd look into it. In the last 18 months a few things I care about have been broken, causing my particular configuration to fail to build, so I had to get into the code to fix them. While doing this I started to remember that I actually quite like hacking on AROS and miss it. That, and my brain seems ready for a challenge again.

Of course this time around, I'd like to avoid the frustrations that contributed to me quitting last time. So this is my plan:

  • I will only work on things that interest me
  • I will not work for money
  • I will not take on significant commitments (ie "sure, I can take a look at that bug" is ok, but "sure, I'll write you a browser" is not)
  • I will not get involved in any political stuff like arguments about project governance, goals (backward compatibility) or anything else

The last point is key. There were a few times previously where I had to do things the wrong way just so that backwards compatibility would be maintained, a goal that I never agreed with. This time, I won't be arguing about it; I'll just be doing what I want to do. It's a light fork, if you like.

I've got a new repository set up over at repo.or.cz. "cake" is what I'm calling my mini-project for now. I'll be committing everything I do there, as well as keeping the AROS mainline there (manually updated as necessary). I will commit things to the AROS Subversion repository as appropriate, but when I do something that causes significant breakage then it will live here. In true open source fashion, anyone who wants my stuff can get it from me and build their own, or if demand gets high, maybe I'll provide some builds or something. We'll see.

So here we go, the brave new world. I'm great at changing my mind, so we'll see how long this lasts :)

monday, 27 april 2009

posted at 08:23

It was my birthday last week, so on Saturday a bunch of friends and family joined me for lunch at a nice little café near my house. I had an awesome time and felt very special, yay :) The big surprise though was people's overwhelming generosity. Gub had let everyone know about my project and so I'm now sitting on a fat wad of cash to support it, more than I'm probably going to need to get everything to make this happen. So yeah, pretty amazed.

So yesterday I ordered the SCART-to-HDMI converter that is the critical piece in all this. I'm expecting it to arrive late this week or early next. While I'm waiting I need to make a list of parts I'll need to make the necessary cables and connectors. I'm thinking on Thursday I'll head to RS to get what I need. I'd usually go to Jaycar, but they don't appear to have SCART connectors. On the other hand, they do have the cheapest HDMI cables I'm likely to get, so maybe I'll just order a single SCART connector online and then build a single adaptor or something. Haven't quite figured it all out yet.

sunday, 19 april 2009

posted at 17:21
tags:

I've booked a hard rubbish collection with the local council for next week, so I've spent the last few days piling stuff on the nature strip. It's been very cathartic, and the garage is looking great. As usual a few things have been "stolen" from the pile, which is fine with me - better someone gets some use out of it. What has me curious is why people have chosen the things they have. For example:

  • I put out the six pieces of a plastic kids cubby (four walls and two roof pieces) that broke in a storm. Someone has taken two walls and two roof pieces, but left the other two walls.
  • I put out a laser printer which as far as I know works fine except for needing toner. Fair enough to nab it, except that whoever took it left the paper tray.
  • I put out two computers. One was an old Pentium II in perfect working order (it even has a disk and memory in it), but the case was in terrible condition. The other was an ancient HP Vectra that I gutted for parts, such that all I threw out was the box and motherboard. The first is still there; the second is gone.
  • I put out a box containing three broken CD drives, four broken floppy drives and a bunch of random cables. It got taken within half an hour of going out.
  • I put out a fully working G3 iMac (one of the original blue ones). It got taken, and in its place was a fully working G3 PowerMac (again, the blue one). That one I grabbed myself for contemplation and hacking.

So I'm not quite sure what people are thinking. I suppose if you can find a use for my junk then more power to you.

friday, 10 april 2009

posted at 00:18
tags:

She's here!

Penelope (Penny) Norris arrived just before 2pm Thursday by caesarean section. Compared to her sisters she's tiny at 3.4kg (7.5lbs), but she's still got the patented Norris large head. I've already poked fun at her with things like "orange on a toothpick" and "sputnik", but I encourage you to join in too.

Her head is cute actually. It's almost spherical with a squished old-man face right in the middle. She frowns a lot too, so she really does look like a grumpy old man.

We're still trying to work out who she looks like. At first she appeared to be a clone of Beth (her 18-month-old sister), but having spent the day with her she really seems to be taking on a bit of a look of her own. So far I think she might look a bit like my eldest younger brother.

Anyway, it's late and I'm tired, so off to bed. Tomorrow I take Francesca in to meet her. That's going to be insane.

thursday, 9 april 2009

posted at 06:41
tags:

If all goes according to plan then in just a few short hours I'll be the father of three girls. I'm actually rather nervous, which seems strange considering I've done this twice before. If it's the kind of thing you go in for then please send up a prayer for Gub, bub, the girls and the army of family looking after them for the next few days. I'll be online tonight with photos and the like.

wednesday, 1 april 2009

posted at 21:37

I had a bit of time today to read about how to get older consoles to make nicer pictures on the big screen, and found that things are much simpler than I thought: this shouldn't be that hard, and should be quite a bit less expensive than expected.

There are two parts to it. Let's assume that we can get raw analog red, green and blue signals out of the console. From this we build a SCART cable and plug it into a SCART-to-HDMI converter. That should be all that's necessary, for about $150 plus whatever shipping from the UK costs for a small box. Unless I can find a local supplier, but it would probably cost about the same.

The tricky bit is getting RGB off the console. In PAL land, the SNES and the Gamecube have RGB right on the MultiAV connector. It's good to know it's there, but it's not particularly useful, since the MultiAV connector isn't a standard plug I can get hold of, and hacking an extra one won't help since the cable doesn't expose all the pins.

For the N64 I'll have to grab the RGB lines directly from the video DAC. It ends up being about the same amount of work because I'll be exposing a new port on the back with some kind of common connector.

So it should actually be doable, assuming the converter works. Yeah, I'm not really nutting anything out; it's all documented and tested already, I just have to put all the pieces together.

tuesday, 31 march 2009

posted at 22:16
  • mood: positive
  • music: regurgitator

Oh boy, long time. Probably about to get longer, given that in just nine days, assuming all goes to plan, I'll be the proud father of three girls, up from the two I have today. So with that in mind I thought it best to do the big dump list of everything that's been happening lately, so I don't have it swimming around in my head too long.

Homewise:

  • As mentioned, I have a new kid arriving next week. I'm spending a great deal of time getting stuff done around the house in preparation for that and generally supporting my wife, who remains in good spirits despite being exhausted all day long and still somehow managing to take care of the other two, both very demanding in their own way.
  • The upcoming larger family has required upgrades, so we now have a larger car, larger couch and larger tv.
  • The family won't be getting any larger after this, courtesy of Dr. Walters. I can't recommend him enough for this type of thing; he was fast, non-threatening and completely transparent. The only downside of the whole experience is that I didn't get to be number 13000 - I had to settle for 12980. Ok, the only other downside is that I don't have access to the totally insanely awesome sleep drugs he uses. I was out chasing space shuttles and I liked it that way!
  • I've been seeing a psychologist to help with my brain problems. She's been awesome, though I'm not very good at doing my homework which I think may be annoying her a bit.

Workwise:

  • Work has been crazy, the same project I've been working on for the last two years continuing to kill me.
  • But, there's been a big change in the last two weeks. In the hopes of making the project go faster, the entire project team has been whisked out from under me and reassigned to a new manager they've imported for just this purpose. At the time I had some pretty serious reservations about it all, but as my boss and I work through all the transition and handover stuff with them, we're increasingly finding ourselves with plenty of time with which to contemplate all the work that we've wanted to do in the last couple of years but could never get priority for. Work is suddenly relaxing again, and in a little while might even be fun! Imagine that!

Techwise:

  • I've been hacking on OpenTTD a lot in the last couple of weeks, finding my way around the codebase by implementing a new kind of depot. Great fun, highly motivating.
  • Gub bought me a new DS to replace the one I destroyed, and I've been liking the new GTA game, so much so that I may actually buy it soon (yarr).
  • Given the aforementioned new tv, I've been looking for ways to hook up my older game consoles to it via something other than composite, which while not bad from the Gamecube is a complete disaster from the N64 with a tv that can see every flaw in the produced image. Whatever I do is going to end up being a big job, but in the course of my searching for details I've found the magnificent GameSX.com and their RGB+Video forums. I'm slowly working my way through just about everything here, but it's becoming increasingly obvious that I require an XRGB-3, and $600 is a bit out of my price range. Lots of study required to figure out what I can do myself, though I suspect I'll run into the same problem I always have with hardware hacking in that I don't have the equipment required to make it happen. It's fun to think about at least!

So yeah, that's where I'm at. I guess you won't hear from me for a while again, except next week to post some pics. Lucky you!

wednesday, 28 january 2009

posted at 22:26

After my lack of motivation I've had a couple more interesting ideas and so I've started very slowly poking at them, being very careful to not overdo it in an attempt to avoid the burnout. So far it seems to be working!

The short version is that my mate Sam and I have been pondering for a year or more the idea of building an arcade cabinet that runs emulators for various old systems (MAME and such). He's a high school woodworking teacher and cabinetmaker by trade, so he's perfectly qualified to build the box. I know a thing or two about computers, so I can do that bit. The problem so far is that we've never really had any good place to build it - I have a garage but no tools, and Sam had no space at all.

He's just recently moved house and is finishing getting his own garage all kitted out with workbenches and drop saws and drill presses and other things that scare me, and he's now ready to build something, so we're taking another look at it. I've managed to scrounge enough parts to build a reasonably good rig, though the ridiculous weather is making me reluctant to go out to the shed to work on it. Arcade Gaming Australia have all the buttons, joysticks and other bits that we'll need. There's only one thing left - an awesome UI for choosing games and things. Yay software!

So tonight I've been sitting under the air conditioner fiddling with Clutter. It's a library for making fancy interfaces by using lots of 3D stuff under the hood. As far as I can tell the most well known example of the type of thing it's for is Apple's Cover Flow. Just from playing with some of the samples I already have some idea of how I'd like a game selector to look, so I've started experimenting using the Perl bindings.

The basic idea is that you set up a bunch of actors, which are basic visual elements - some text or an image for example. You can specify various transformations for an actor, eg scaling, rotating, etc. After that, you place your actors somewhere on the stage, which is roughly analogous to a window.

Next is where I get a little confused, but not so much that I can't get something done. You set up a timeline, which has two parameters - number of frames, and frames per second. You hook up an "alpha" to the timeline, which is a function that gets called every frame and returns a number whose purpose I don't fully understand yet. The number is used to drive "behaviours" attached to each actor, which make them do something depending on the current distance through the timeline. A behaviour might be to move the actor around the stage, rotate it, or something more clever.

There's also an input layer, but I haven't really started looking at that yet.

So here's the fruits of my evening. It takes a random image and rolls it around a window.

#!/usr/bin/env perl

use 5.10.0;

use strict;
use warnings;

use Glib qw( :constants );
use Clutter qw( :init );

say "usage: roll image" and exit -1 if !@ARGV;

my $stage = Clutter::Stage->get_default;
$stage->set_color(Clutter::Color->parse("DarkSlateGray"));
$stage->signal_connect('key-press-event' => sub { Clutter->main_quit });
$stage->set_size(800, 600);

my $actor = Clutter::Texture->new($ARGV[0]);
$actor->set_anchor_point($actor->get_width / 2, $actor->get_height / 2);
$actor->set_position($stage->get_width / 2, $stage->get_height / 2);
$stage->add($actor);

my $timeline = Clutter::Timeline->new(100, 26);
$timeline->set(loop => TRUE);

my $alpha = Clutter::Alpha->new($timeline, sub {
    my ($alpha) = @_;
    return int($alpha->get_timeline->get_progress * Clutter::Alpha->MAX_ALPHA);
});

my $rotate = Clutter::Behaviour::Rotate->new($alpha, "z-axis", "cw", 0.0, 359.0);
$rotate->apply($actor);

my $path = Clutter::Behaviour::Path->new($alpha, [ $actor->get_width,                     $actor->get_height                      ],
                                                 [ $actor->get_width,                     $stage->get_height - $actor->get_height ],
                                                 [ $stage->get_width - $actor->get_width, $stage->get_height - $actor->get_height ],
                                                 [ $stage->get_width - $actor->get_width, $actor->get_height                      ],
                                                 [ $actor->get_width,                     $actor->get_height                      ]);
$path->apply($actor);

$timeline->start;

$stage->show;

Clutter->main;

Hard to show it here, but here you go:

Of course I have no idea if this is the "right" way to do it, but it seems to perform well enough so it will do for now. Next is to make a little photo thumbnail viewer, using the arrow keys to scroll through the photos and a little zooming magic.

tuesday, 20 january 2009

posted at 07:36

I haven't written any code for ages and ages. I'm having a lot of trouble getting motivated, and although I've had a few ideas I haven't really come across anything sufficiently challenging or world-changing to really get excited about. Some ideas I've had that haven't got anywhere, in no particular order:

  • A group chat client (IRC, Jabber MUC, etc) that allows images, videos, etc from the web to be embedded directly in it so that when we share links I don't have to click everything. I started prototyping this in Perl using WebKitGtk, but it doesn't have an API that allows a page to be added to incrementally, and I don't really want to get back into hacking on WebKit right now.

  • A version of Settlers of Catan with a really sweet AJAX/DHTML/Web 2.0/buzzwordish interface. Currently the options for playing Settlers on a computer appear to be on your local machine with Pioneers or on the web through Java applets. I got as far as thinking about the AI and poking at the Pioneers code a bit before realising that the only interesting bit - the AI - is too hard for me, which means borrowing the AI code and then just writing the web interface, which is totally boring.

  • A browser-based sprite engine. The idea is you'd throw in a bunch of images, specify coordinates and any filters/transformations you might want, and it takes care of their display. Again though, it's hacking browser stuff, which I'm pretty much over at this point in my career.

  • A web-based forum that doesn't suck. I'm quite over every bit of web software requiring a database and having insanely complicated interfaces. I had a thought of some kind of cross between Digg, the Joel On Software forums (design notes) and an image board. Basically no mad hierarchy of forums and posts, no threads, no userpic/sig/meta clutter on every post, and no login requirement. Internally it's very much like blosxom, just a simple one-file script that you can drop into your cgi-bin and it just goes, and that you can extend in any way you like with plugins. The whole thing would be backed by git and just be sweetness. I made a good bit of progress on this, but at the end of the day it's basically a repeat of the code I've just written to get this blog updated, and it's for web forums, which I hate and avoid as much as I can anyway.

  • A database diff tool for Lotus Notes databases. This is kind of interesting, and it would help a lot at work and dodge the need for us to buy any more stupidly expensive licenses for a commercial product, but the Notes API really really sucks and I haven't been able to get a development environment working on Debian yet, which means I'm going to have to use a Redhat VM if I want to make this work. That amount of effort, in my spare time, for what is really part of my job is just too much.

So there you go. A few reasonably good ideas, but no motivation. As a result I've been killing time playing games, but that's starting to wear a bit thin. I'm not sure what to do right now, because I really feel like I want to write something. Motivation came easily when I was working on jabberd2 - we were changing the world! It was easy for AROS because I'd never worked on so many different facets of an operating system before and everywhere I turned there was something new to be done. None of the ideas above fits into either of those categories. No existing project I've considered so far seems groundbreaking enough, or to have enough for me to learn, to make it worth the effort. I'm not sure what to do now.

It's likely I'm growing old and curmudgeonly. I hope I can get interested soon, if for no other reason than my bus trips are getting boring.

monday, 19 january 2009

posted at 22:12

This is a rant. You're not expected to agree or care.

I bought Left 4 Dead a few weeks ago. It's a good game. Not the best game ever, but enjoyable enough. Nice idea, fun to play, etc. But, as usual, it's being ruined by the internet.

I was in the mood for a game tonight so I hopped on and started looking around. I had no success finding anything interesting, just frustration. The first game had four players that, based on their names, were clearly affiliated with the same clan or whatever. Two of them joined survivors, the other two infected. I was survivors, and the other random and I got jumped within about a minute while they stood idly by and did nothing. I quit shortly after.

The second game saw me on the infected side. I made some good attacks and finished with the most points, which made me happy. On our turn as survivors they somehow decided, without speaking, to go in a different direction to the usual route. Because I didn't know about it, I got separated and they had to come back for me. I asked them to let me know if they were planning to run off without me next time, and they voted to kick me out of the game.

By this time I'm pretty annoyed, but I decided to give it another try. The other players were at least communicative, so I knew that they wanted to rush, and I was ready for that. There's still the number one rule of this game to be remembered though - stick together. The front two rushed a little too fast through the apartments, while another player and I got pinned in a place where we couldn't be saved. Leaving us behind was apparently our fault, and we got an earful for it. I just couldn't be bothered after this and quit the game.

So here's a good game that I like playing, and while I'm not the greatest player in the world I don't think I'm a complete passenger either. So why is it so difficult to find a good game? Am I destined to play by myself forever?

So yeah, people have ruined the internet. I'm not telling you anything new. But still, sigh.

sunday, 11 january 2009

posted at 10:50

I just spammed Planet Linux Australia and probably anyone else who subscribes to my feed with all the posts I just brought back to life. I'm sorry :(

sunday, 11 january 2009

posted at 09:04

I've just added all the posts from my old Wordpress blog to this. It's taken a few days since I had to write a little HTML-to-Markdown converter and then manually clean up a bunch of problems, but it's here now. There's probably still a bit of brokenness, but it shouldn't be too much now. No comments yet, but I'll bring those across once I write some code to hit the disqus API properly.
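
The guts of such a converter are less scary than they sound. Here's a minimal sketch of the approach using HTML::Parser - it only knows a handful of tags, skips all the edge cases that actually ate my few days, and the tag tables are purely illustrative:

#!/usr/bin/env perl

use strict;
use warnings;

use HTML::Parser;

# markdown equivalents for a few simple tags
my %start = ( em => '*', strong => '**', code => '`', p => "\n\n", li => "\n  * " );
my %end   = ( em => '*', strong => '**', code => '`' );

my ($out, $href) = ('', '');

my $parser = HTML::Parser->new(
    api_version => 3,
    start_h => [ sub {
        my ($tag, $attr) = @_;
        if ($tag eq 'a') { $href = $attr->{href}; $out .= '[' }
        else             { $out .= $start{$tag} || '' }
    }, 'tagname,attr' ],
    end_h   => [ sub {
        my ($tag) = @_;
        if ($tag eq 'a') { $out .= "]($href)" }
        else             { $out .= $end{$tag} || '' }
    }, 'tagname' ],
    text_h  => [ sub { $out .= shift }, 'dtext' ],
);

$parser->parse_file($ARGV[0]) or die "parse failed\n";
print $out, "\n";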

wednesday, 7 january 2009

posted at 08:02

I bought a pile of games through Steam a few weeks ago and as part of the deal I was given a guest pass for each of Counter-Strike: Source and Team Fortress 2. I don't need them but if I don't give them away soon they'll expire. If you haven't played one or both of these games and you'd like to give them a try, drop me an email and I'll send them your way.

wednesday, 31 december 2008

posted at 10:32

I've had a nice week. Christmas things were awesome, lots of fun and I scored very well - my own copy of Settlers of Catan and the Cities and Knights expansion, some tshirts, IT Crowd and Frontline DVDs and some DS accessories. People are learning my tastes well, it seems :)

Sadly I've managed to break my DS. I put some screen protectors on it a while ago but they've got pretty scratched, so I decided to replace them. When I lifted the one off the bottom screen I managed to bring the touch lens with it, so even though I've put it back on, the touch screen doesn't work. I can still play most games and I wasn't playing it much at the moment anyway, so getting it fixed isn't desperately urgent, but it's a bit annoying. I'm still looking into repairs. I can get a brand new LCD+touch screen element for about $70, but I'd have to install it myself and I don't know if I really need all that since the LCD works. I'd need to get special screwdrivers too. I'd rather just get someone to fix it for me, but since I can get a whole new one for $190 I'm not sure how much effort it's worth. I'll probably leave it until it starts to become a problem.

Sunday was game time, mostly Left 4 Dead, Counterstrike and Day of Defeat. We also played Ricochet which is really quite cool if a little simple. All good fun, as usual. Tonight is more games, this time of the board game variety. In reality it'll just be Settlers since everyone knows how to play already.

I watched all of the Boxing Day Test, which was an excellent game even if we did lose. I predicted at the end of day two that we'd lose even though we were in a position where a loss was close to impossible, and so it came to pass. Go seer. Apparently on day two South Africa were at $13 to win; shame I didn't get some of that action.

In between all this I've done the gardening, got hold of a really awesome wooden cubby that I'll put together for the girls in the next couple of days, done a pile of chores, read lots and generally had a nice relaxing time. Gotta go back to work on Monday but I think I'll be pretty flaked by then which is exactly what I needed.

wednesday, 24 december 2008

posted at 15:30

For the first time in a few years I'm actually excited about Christmas, for several reasons:

  • The girls (Francesca in particular) are finally old enough to really get into the day and I love watching them
  • I've had the money and the motivation to find what I think are some really awesome and quite thoughtful gifts for my lady
  • I've heard I'm getting some awesome stuff and I'm excited to find out what
  • In the last year I've had many opportunities to get to know my family again and I love spending time with them and will get to do so tomorrow
  • The food I've seen prepared so far is looking damn nice

Hope you have a good day too. Oh, and if for whatever reason you don't do Christmas, have a good day anyway :)

monday, 22 december 2008

posted at 14:48

Well here it is, all new prettiness that I've been planning and threatening for a little while now. I've been on the verge of writing something for a few months, but I was kinda sick of the old orange and green, and I knew I didn't want Wordpress anymore, so I didn't really want to do much until I'd figured out what it would look like and all that.

The whole thing is based on blosxom, a pile of custom plugins, a sweet preprocessor and managed through git. Markdown makes it so I can just type normally and get all my formatting for free. I'm trying disqus for the comments, since I really wanted to get away from needing to run a database, and they have anonymous posting, which is all I really need.

I bought art from iStockphoto (that cute monster eating the rocket) because I couldn't find what I wanted for free, and I'm actually really happy with it. As usual daniel figured out all the tricky CSS bits because I really really suck at it, so it's a big happy team effort.

There's not really much left to do apart from some style tweaks and sorting out exactly what I want in that right bar. It's all nice and good and happy, I'm quite pleased :)

thursday, 17 july 2008

posted at 09:07

So, long time huh. Time to write something, I guess.

It's hard to write again after a break, not least because so much has happened in the last three weeks that unless I write for another three weeks I'm never going to cover it all. So right now I'm not going to try. I'll just write about what I've been working on, get caught up to some degree, and maybe come back to any other interesting stuff down the track. So here we go!

I've long threatened picking up a DS homebrew kit and doing something interesting with it. I finally snagged an M3 DS Real from the Monash post office, of all places. It's a cute little thing. I got a 4GB microSD card with it, so I should have enough grunt to do anything I could ever hope to do with it. The next thing is just to work out what that is. I have an idea of course, one probably rather predictable if you know me.

I have an interest bordering on an infatuation with the game Frontier: First Encounters; specifically the JJFFE variant. It's a great game, and the effort and style that went into making an old Windows game come to life on more modern systems still totally impresses me. John Jordan took the game binary and ran it through a custom disassembler that produced an assembler source file that would compile on whatever platform he chose. His disassembler also identified operating system calls for graphics, sound, input and filesystem access, which he then abstracted and reimplemented for DirectX, SDL, and whatever else. So now the game runs on (i386) Linux without issue as well as on modern Windows systems. He even fixed a heap of bugs. That's great!

I've messed with this code at various times. I implemented the OS abstraction for AROS a couple of years before I got involved in AROS proper. (That work later led to me working on some graphics subsystem speedups and a SDL graphics driver for AROS). I've also long dreamed of somehow converting it to pure C so that it could be hacked on properly. I've dabbled with this at various times, using both automatic and manual techniques, but haven't really got very far, mostly because of the limited success others have had with the general problem of decompiling assembly back to C.

So anyway, I got a DS kit, and of course started to think about how cool it would be to play FFE on it, and also about how to take advantage of the dual screen and touch screen. I've been dreaming of interesting UI changes that would make the game work much better on the DS, but of course first I have to get the game working there. That is not a trivial task, and has been the subject of inquiry for the last couple of weeks.

The problem is obvious. The DS has a pair of ARM CPUs. The JJFFE source is in i386 assembly. So there are really only two options - some sort of emulation, or converting the code to a higher-level language and then recompiling it for ARM.

While emulation would only really require a processor emulator for the game core, since all the system interaction could be done in C, and perhaps would have been the easier option, it doesn't help much with my eventual goal (or "hope", rather) of making significant modifications to the code to support the DS hardware properly. So instead, I've again returned to converting the assembler code to C.

As mentioned above though, this is something I'd pretty much given up on as being too difficult. I thought about it for a while and realised that as a first cut, I don't need to convert it back to anything resembling its original C. Instead, what if I were to write an assembler that produced C code implementing the processor instructions, rather than producing raw machine code? The result would barely resemble handwritten C - we'd essentially have a kind of static CPU emulator built into the program code itself, with global variables representing the processor stack and registers. But it could be recompiled for another CPU, which is the point of the exercise.
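
To give a flavour of what I mean, here's a toy version of the idea. Everything in it is invented for the example - four instructions, no flags, no addressing modes, no jumps - but it shows the trick: each instruction becomes a C statement over global "register" variables.

#!/usr/bin/env perl

use strict;
use warnings;

# the global "machine state" that the generated C operates on
print <<'C';
#include <stdint.h>

uint32_t eax, ebx, ecx, edx, esi, edi, ebp, esp = 1<<20;
uint8_t mem[1<<20];

void program(void) {
C

# translate each instruction into an equivalent C statement
while (my $insn = <DATA>) {
    chomp $insn;
    if    ($insn =~ /^mov (\w+), (\w+)$/) { print "    $1 = $2;\n" }
    elsif ($insn =~ /^add (\w+), (\w+)$/) { print "    $1 += $2;\n" }
    elsif ($insn =~ /^push (\w+)$/)       { print "    esp -= 4; *(uint32_t *)(mem+esp) = $1;\n" }
    elsif ($insn =~ /^pop (\w+)$/)        { print "    $1 = *(uint32_t *)(mem+esp); esp += 4;\n" }
    else                                  { die "can't translate: $insn\n" }
}

print "}\n";

__DATA__
mov eax, 5
push eax
add eax, ebx
pop ecx

Jump targets would become C labels, calls would become function calls, and so on. The output is hideous, but the compiler doesn't care, and it compiles for ARM just as happily as for i386.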

This seemed like a reasonable approach, but writing an assembler is insanely complicated. After attempting a hacky parser in Perl, I decided that nothing short of a full assembler would be able to do the job. NASM proved too complicated to penetrate, but then I found YASM, which is a highly modular clone of NASM.

So I took YASM and started writing a custom object format, one that would output C code. However, after experimenting to gain some experience with the interface, I realised that I was just getting the raw machine code and then converting it to C, with a little bit of symbol table gymnastics to identify and produce simple unadorned C functions. This reminded me of a project I worked on for a while in 2004 that turns out to be much better suited. That project is a custom disassembler/decompiler of the same kind as was used to produce JJFFE in the first place! Let me explain.

Another old game that I love is Transport Tycoon (actually its sequel, Transport Tycoon Deluxe). At the time, it was Windows-only. There was a project called TTDPatch which would take the game binary and hook all sorts of stuff into it to add new features and fix bugs and whatever else. This worked well, but it was still Windows-only. Wine did a reasonable job with it, but it was still less than ideal. So I decided that I'd give it the same treatment as FFE got, and produce a disassembly and system abstraction that could be run anywhere.

I spent a lot of time studying JJFFE and Jordan's decompiler and even had a series of email discussions with him to get a feel for just how to do this. After several weeks I managed to get my decompiler to the stage where it produced a viable disassembly and C file of OS call stubs. But, as fate would have it, the day it compiled and ran for the first time (segfaulting of course, as I hadn't yet learnt about how Windows uses the %fs segment register), OpenTTD was announced, which was essentially a conversion of the original game back to C. My decompiler had no further reason to exist, and so I abandoned it.

The way it worked was pretty straightforward. It implemented what is essentially a Portable Executable (ie the Windows binary format, like ELF for Unix) loader with calls into the NASM disassembler to analyse the code and produce a disassembly and a stub file. Simplified, it does the following:

  • Inspect the program binary and find the code, data, bss, import and relocation segments.
  • Load the program binary into RAM.
  • Apply the relocations to produce a complete program image, additionally creating a "label" for each relocation.
  • Inspect the import section to build a list of all the external libraries and functions that the program wants.
  • Disassemble the code segments to find all the relocation labels that are in use and what they point to. From the instruction type, we can determine whether the target is code, data, bss (ie uninitialised data), a vector table, etc.
  • Disassemble from each code label to the next to find any other labels missed in the first disassembly run. That might have happened, for example, if there were "garbage" bytes in between the end of one function and the start of another that caused the wrong disassembly to be produced crossing the function boundary.
  • Do this disassembly over and over until no new labels are produced (there's a sketch of this loop after the list).
  • Run through any relocation labels that have not been processed yet, and make them data labels. This works off the assumption that if the linker thought it important enough to include a relocation, we should probably include whatever that relocation points to in the output, even though we couldn't actually find it in the code.
  • Output EXTERN declarations for each external function name.
  • Disassemble from each code label again, this time producing actual output. Any memory references in the output (ie things beginning with 0x) get replaced with their corresponding label, if there is one.
  • "Disassemble" from each data label, producing raw byte output (ie db or dd). For any data that was referenced via a relocation, produce the corresponding label.
  • "Disassemble" from each bss label, producing a memory reservation in the output (ie resb).
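
That repeated disassembly is just a fixed-point loop over a growing label table. A toy sketch of the shape of it, with the real NASM-driven disassembler stubbed out so it runs standalone:

#!/usr/bin/env perl

use strict;
use warnings;

# keep disassembling from every known code label until a pass turns
# up nothing new
my %label = ( 0x1000 => 'code' );   # seed with the entry point

my $found = 1;
while ($found) {
    $found = 0;
    for my $addr (sort { $a <=> $b } keys %label) {
        next unless $label{$addr} eq 'code';
        for my $target (disassemble_from($addr)) {
            next if exists $label{$target};
            $label{$target} = 'code';
            $found = 1;
        }
    }
}

printf "label at 0x%x\n", $_ for sort { $a <=> $b } keys %label;

# stub standing in for the real disassembler; returns the jump/call
# targets found when disassembling from the given address
sub disassemble_from {
    my ($addr) = @_;
    my %fake = ( 0x1000 => [ 0x1040 ], 0x1040 => [ 0x1000, 0x1080 ] );
    return @{ $fake{$addr} || [] };
}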

Theoretically, the resulting output from that is just about usable. There's a bit of manual cleanup that has to happen (like the mentioned deal with the %fs register), but this output should at least compile and link, which is most of the fun. Then you implement the stubs for your platform and you're away.

So, back to our original problem of producing C from a binary. I realised that in this code, I'd already done most of what I needed. I know where all the function boundaries, jump points, vector tables and data are. All that needs to happen is that instead of producing formatted assembly, I produce some equivalent bit of C code. There's some complication of course, like the fact that sometimes several instructions map to a single C construct (like if or while), but I figure I'm most of the way there.

So right now, I'm working on cleaning up and generalising the decompiler, which I've christened opsoup. It was pretty heavily tied to the PE format before, which of course is no good for me - I need ELF. I'm not bothering with trying to keep it compatible with PE at this point, as I have a pretty specific purpose. I can always add that back in later if I ever need it.

I have absolutely no idea how this is going to go, but it's fun finding out. In addition to playing my game, I'm hoping that having the code in C, even horrible not-quite-C, will make it much easier to gradually convert some of the code into actual nice C (due to the availability of things like gdb and printf). I don't expect it to happen fast, but I've been hacking at this code on-and-off for the last five years, so messing with it for another five doesn't really concern me that much.

thursday, 19 june 2008

posted at 00:49
tags:
  • mood: modular

A few random things I've been working on lately:

  • Started migrating this blog into ikiwiki. Stuck on a problem with file create/modify times not being preserved, which makes complete sense but is annoying. I think a plugin is required, much like one I wrote for blosxom once upon a time.

  • Made progress with Test::MockTerm. The terminal itself works, with open() and POSIX::isatty() being overridden correctly. I'm currently planning the interface for sending and handling control characters (and from there emulating Term::ReadKey). It's hairy.

  • Made a new release of XML::Quick after a couple of years. The test suite now runs (there were bugs in the suite itself) and an ancient bug was taken care of to boot.

  • Dug out XML::Spice, something I started a couple of years ago as an answer to Python's stan XML generation library. It already has some funky magic in place that lets you call the generator function to create some piece of XML which is totally complete and usable, but then call the generator again with that returned chunk as one of the arguments to have it embed that chunk into another chunk of XML. At this point both are valid, but it can do some funky stuff like move and reprefix namespace declarations to make the result more concise without changing the semantics. It does this without having to reparse the original XML. You have to read the tests to find out more, though there's a little sketch of the idea after this list. In any case, it's not finished yet and perhaps never will be since I'm long over XML.

  • Also dug out HTML::Calendar::Render (no link yet), which you give a bunch of events and it creates a calendar using HTML tables. This was done for work, at a time when we temporarily needed an alternative view to our corporate calendar system. Producing calendars in HTML is nothing new, but this is prettier than any of the others I've seen on CPAN. It works really hard to produce stuff that looks like your calendar in something like Outlook, where overlapping events appear next to each other at half-width. This one I'm kinda interested in getting to some kind of release quality. The plan is to split the tree generator from the renderer itself, so you can add modules that might render it with or without HTML tables, in PDF, or via cairo. Coming soon I guess.
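
Since I mentioned it, basic XML::Spice usage looks something like this - quoting from memory rather than the docs, so take the exact output with a grain of salt:

use strict;
use warnings;

use XML::Spice;

# a chunk is complete, valid XML all by itself
my $point = x("point", { x => 1, y => 2 });
print "$point\n";

# and can be dropped straight into a bigger chunk without reparsing
my $doc = x("doc", x("points", $point, x("point", { x => 3, y => 4 })));
print "$doc\n";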

wednesday, 4 june 2008

posted at 23:33

I've been getting into Perl again in a big way. I'd been getting in the mood for code again, and started looking through my old stuff to see if anything tickled my fancy. I found some Perl stuff, which I've been chucking in git, and was poking about in it when a question about IO::Prompt came up. Since I'm theoretically responsible for that module, it seemed like a good place to kick off.

I started work again on Test::MockTerm, this time making a pure-Perl version rather than relying on IO::Pty, which had a few issues that made it unsuitable for what I needed. As mentioned in that old post of mine though, without real terminal handles I have no way to test the code paths that expect them.

During my break, Perl 5.10.0 was released, but the hoped-for patch that would allow me to override the -t file test operator never made the cut, so I'm still stuck without a solution. The short-term workaround will therefore be to modify IO::Prompt to use POSIX::isatty, which does essentially the same thing. Hopefully it'll work on Windows when that time comes.

Longer term, there needs to be a way to make -t overridable in some way, so I started poking at the code a bit, and thought about extending tied handles to do the work.

For the uninitiated, Perl has a facility called "tying" where you can essentially tuck an object reference into a variable. When operations are performed on that variable (like storing a value in it), Perl arranges to call methods on the object to do the work. perltie explains the details.
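
A tiny example, using a tied filehandle since that's what matters here. ShoutHandle is made up for the demonstration; the method names are the ones perltie defines:

#!/usr/bin/env perl

use strict;
use warnings;

# a tied filehandle that uppercases everything printed through it
package ShoutHandle;

sub TIEHANDLE { my ($class, $fh) = @_; return bless { fh => $fh }, $class }
sub PRINT     { my ($self, @args) = @_; print { $self->{fh} } uc join '', @args }

package main;

tie *SHOUT, 'ShoutHandle', \*STDOUT;
print SHOUT "hello, world\n";   # comes out as "HELLO, WORLD"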

You can tie a filehandle to an object in the same way, so it's the ideal way to do what I want. Except that in the BUGS section, we see:

Tied filehandles are still incomplete. sysopen(), truncate(), flock(), fcntl(), stat() and -X can't currently be trapped.

-X is the general way to refer to the file test operators. Damn.

Tonight I sent off a patch that implements stat() and -X for tied filehandles.

It was quite a challenge to write, but very fulfilling. One of the really interesting things about hacking on Perl is that the culture of writing comprehensive tests and documentation is deeply entrenched, so no matter what you do, you end up with high-quality work at the end of it because it's provably correct.

Unfortunately even if it is accepted it seems unlikely to me that the patch will appear anytime soon. I don't know what the policy is on new features on the current maintenance branch, but I wouldn't be surprised if it was slated for inclusion in 5.12. It took over three years for 5.10 to appear after 5.8, so I won't have this code available in a stable Perl for quite a while yet, but at least I have something to point to.

Next is to get Test::MockTerm finished, which I'll get back into tomorrow.

friday, 23 may 2008

posted at 12:10

I'm in the process of putting a heap of code that's just sitting around on my laptop into git repositories. To make my life easier I've moved all my AROS stuff into a subdirectory. So if you're looking for one of the AROS repositories, or you've cloned from me, you'll need to change paths. As usual, cgit lights the way.

sunday, 18 may 2008

posted at 22:59

Lately:

  • Played a lot of Morrowind. Maybe too much - enough that the people not interested in it don't even talk to me anymore, while the people who are interested almost come to blows debating the minutiae that drive the thing. Got down and dirty with the construction kit, making a fancy unlimited storage mod. It's all fun.
  • Got interested again in the details of RPGs, MUDs, etc. Still just idle pondering at the moment.
  • Thinking about Javascript again, idly considering resurrecting amber (the Javascript host environment that I started a few years ago).
  • After some Debian upgrade issues, switched from XFCE to awesome as my window manager. Still getting used to it, but I know it's where I want to be, given that I was a long time Ion user before tuomov went insane.
  • Read Chuck Norris' autobiography. The subject matter is interesting, though the writing isn't really anything fantastic. It's an entertaining enough read but doesn't come highly recommended; do with that what you will. I am vaguely inspired to trace my family tree a bit after looking at one of the photos in the book and noticing that Chuck in his early years bears some striking similarities to my dad.
  • Messed with getting Debian on the OLPC. So far I have a minimal Debian system booting from a USB stick. As I write this post I'm working to get it onto the internal flash, and we'll go from there. From poking around the internets it looks like nobody has done exactly what I'm trying to do, which is to remove all traces of the default OS. All the instructions I can find for putting alternate systems on the machine involve either having the alternative on a USB stick or SD card, or using the "alternate OS" feature of the default system. Bugger that - if I want the original stuff back at any time, I can easily grab an image and reflash it. My aim is to have the OS proper on the internal flash with /home on an SD card. Once I get it all working I want to try putting MAME on it, using the game buttons with the LCD folded back to produce a kind of portable arcade gaming pad. It's something different at least.
  • Trying to figure out how to be a good boss to the nine thousand new staff we seem to have picked up in the last few months, not to mention looking after the old guard too. It's really hard.
  • Been looking at ikiwiki and thinking about moving this whole site over to it. I still want the blog, but I miss having the wiki. Rolling it all in together, backed by git, sounds like a pretty sweet ride.

There's probably some other stuff too, but I think that's all the major stuff. See, making great use of all my spare time :)

monday, 28 april 2008

posted at 22:17

There's no easy or amusing way to say it, so I'm just going to say it. My involvement in AROS, including Traveller and the nightly builds, ends right now.

Over the last few weeks I've been doing a few different things. I played lots of Morrowind, started work on a couple of brand new projects, played lots of the new Advance Wars game that I got for my birthday, read the new Ben Elton book, and a few other things. I've enjoyed every part of it. I've been doing lots of different things, stretching my brain in different ways, and not been beholden to anyone. Since I'm happier, work has been much better, home has been much happier, everything just seems good.

The whole time though there's been a tiny nagging voice in the back of my head. That's the one that has been telling me that I need to get on with Traveller. Only a couple of months to go. I hate that voice. I've tried a number of times to get into it, but I've only added about twenty lines of code to the loader in that time.

The fact is that I'm just over it all. Every part of AROS that I was interested in, I've done enough work on to learn everything I wanted to learn. I wrote a filesystem. I wrote a graphics driver. I ported some minor apps. I hacked on some libraries. It's the same for Traveller. I got it to a point where you could browse the web. Everything else is just a bonus - in these areas, there's not really much left to take my interest.

I've been over this before. This is a major part of my reason for planning to leave after Traveller. But I really started thinking about why it's so difficult for me to get motivated. The question I eventually got to was "would I be trying to finish this if there wasn't a nice prize in it for me?". And the only answer I had to that was "no".

That was a rather enlightening moment. I'm a little bit ashamed of myself actually, but not surprised. I've known since forever that money is not really a motivator for me, it never has been. I think I just got a bit dazzled by the possibilities; large amounts of spare cash don't come my way too often and there's at least one neat gadget that I've been hankering for.

So all in all, I have no compelling reason to continue. I realise I've made commitments, and I hate breaking them, but I've made other commitments in my work and personal lives, and I can't do them all, so I have to choose. Once I really looked at it, it seemed a fairly straightforward choice.

It shouldn't take long to remove myself. I've already managed to offload nightly build duties, as there were some issues and recent updates that I've been rather tardy in sorting out and so someone else offered to take the build on. I'll email TeamAROS shortly to let them know that I'm ditching the bounty. I guess I'll spend a little time during the week responding to email and that should about take care of it.

For anyone who wants to take on WebKit and make a browser, feel free to take the code I've already done. It's all available via my git repository under free licenses. Do contact me if you need help; while I'm not working on the stuff and won't be paying a lot of attention to the current goings-on, I'm quite happy to offer support and advice on specific issues.

Finally, thanks to all the nice people in and around the AROS community. I've had a great time getting to know you and working with you. I've no doubt that we'll see each other around the internets from time to time, and I'll drop into #aros when I can too.

This blog isn't closing up shop, of course. Once I'm back from playing games and reading I'll likely be back writing about whatever I end up working on next. Current interests are DS hacking, binary decompilation and RPGs. By the time I write something they may not be interesting anymore in which case you'll get to read about something else :)

tuesday, 15 april 2008

posted at 23:17

Late last year I wandered into my local optometrist to have him check out the weird dots I could see sometimes in bright light. He told me that it was entirely normal and nothing to worry about, but in the course of testing it turns out that my eyesight is not quite right, though not so terrible that I would have known.

But I saw the crisp edges on the text used for the tests, and that memory has haunted me since. Every time I've read a road sign, book, whatever, I've been acutely aware of how blurry the edges are. So a couple of weeks ago I chose some frames that I liked and handed over the cash needed to turn them into proper spectacles.

I picked them up today, and I'm having a little trouble getting used to them, but I'm stunned at just how crisp the text on my screen is right now. It had never occurred to me that antialiased fonts might not have to look blurry around the edges. Everything just looks so unbelievably crispy. While playing Morrowind tonight, I could see the seams where polygons of different texture quality joined. It's amazing.

I'll try them out at work tomorrow, of course. I'm really interested to see if they help me feel less tired by the end of the day.

sunday, 13 april 2008

posted at 22:34

You've probably noticed that I've been fairly quiet for the last couple of weeks. It's partly related to what I wrote about last time, but not entirely. I've actually been doing quite a bit of hacking on a new project that I'm quite excited about, but I'm putting it on hold for a little while so I can get back to Traveller. I'll write more about that one when I get back to it.

Other reasons are that work has been absolutely insane, so I haven't been able to send mail or IRC much during that time, and when I get home I'm tired enough that if I do want to sit at the computer I'm either playing a game or just not interested in talking to anyone.

Fortunately a big part of the last couple of weeks at work has been getting some new staff acquainted with the place and offloading some work, so next week should be much more relaxed. I still may not blog much, and I probably won't be on IRC, but I'm responding to email again at least.

My head seems to be in better shape, though I'm not taking anything for granted. Generally I found that I wasn't even having to try to enjoy myself, but on the other hand I had a couple of days like yesterday where my daughter was probably not aware just how close to death she came. So I'm hardly cured, but at least I've been reminded of what it can be like. I had a good chat with my minister who had some good advice, and I'll see my doctor this week, and I'm feeling pretty positive about the whole mess, so it's good!

Thanks to everyone for your support and advice after my last post. Lots of people have told me their stories which I've really appreciated hearing. If you've sent me email and I haven't got back to you, please be patient - I will get there.

So starting tomorrow, on my bus trip, I'll be back working on the new shared object loader. I'm really going to push hard to get motivated and have something to show by the end of the week, because this bit is rather starting to hang over me. I really want to get back to the eyecandy :)

sunday, 30 march 2008

posted at 20:57

Hi, I'm back.

It would appear that there's a problem with the old grey matter. This is not news to me; I've suspected as much for a while now. It's only just now got to the point where I've been able to admit that whatever is up is not something I can fix on my own, and so the time has come to enlist some help and to make a few changes so I don't completely explode.

"Depression" would probably be the proper term to use here, except that there's about as many varieties of depression as there are sufferers, and since I haven't actually seen a medical professional yet it probably wouldn't be prudent to add labels just yet. Besides which, I don't really care what you want to call it. I'm more interested in fixing it.

Since Beth was born (six months ago) I've found the various pressures on me increasingly difficult to cope with. There's the obvious work involved with a new baby, wanting to make sure that she has everything she needs all the time and also making sure that her mother, my wife, has all the time and space she needs to concentrate on the job at hand.

Then there's the existing kid, who at three is fiercely independent and doesn't know when to shut the hell up. Don't get me wrong, she is funny, kind, generous and compassionate, and I know her determination will serve her wonderfully well in the future, and so I wouldn't change a single thing about her. But some days it's all I can do to not lock her up under the house.

Then there's work. We've spent the last two years working on replacing everything that I spent the previous six years building, and although I understand (and generally agree with) the rationale, it's hard to let go. After so long it's hard to find motivation to continue working on it. More recently, I've accepted a promotion to a team leader position, so I've moved away from the metal a little and am now responsible for the general well-being of a group of people. This is unfamiliar territory for me and I'm eager to learn and do a good job, but that of course means additional pressure.

Next up is my personal time, those precious two-three hours of bus travel plus another hour or so in the evening. That time has been spent on AROS for just over a year. I've written at length about my increasing frustrations with the project, which has greatly reduced my interest and motivation, yet at the same time I'm still committed to a major project within it. So I have a lot of pressure and not a lot of fulfillment coming from that quarter.

I do all of this stuff without a great deal of support or guidance. I have friends, of course, and my wife is the most awesome person of all, whom I could not function without, but what I really need is an older bloke who has already done the family/work/personal life juggling dance and come out the other side. Ideally this would be my father, but for various reasons that I'm not willing to talk about right now he's unavailable, so I'm feeling a little stranded. My walk with God is in a pretty sorry state as well, so there's not a lot of help coming from there either.

I thought I could fix this myself. All I needed was more sleep, or to better manage my time, or to reduce the number of tasks I'd taken on, or to spend my time at home with my kids doing meaningful activities, etc. These are all things I've tried to change, with varying degrees of success, but it hasn't been enough. Not least of the problems is that in many cases they conflict with each other.

In the lead up to Easter I carried a dark cloud around with me, until on Sunday I finally broke down. I hadn't slept much overnight as Beth had a difficult time, so I was tired. I screamed at Francesca a couple of times, both of which were very much overreactions. Once the girls were in bed at lunchtime I ended up talking to Gub in tears, telling her this whole story. Of course she knew, having had some experience with this sort of thing herself, as well as having lived with me. We talked about it more while on holiday and I finally started to get a little hope as we figured out a plan.

Thursday will be a big day for me. I'm going to take the day off and visit my doctor, looking for one or more of drugs or a recommendation of a counsellor who can help me talk some of this stuff through. Next after that is a visit to my minister for a chat. I want to talk to him about finding someone who would be willing to act in some sort of mentoring role for me, someone outside of my life who will take an interest in it and be available if I'm in an explosive mood. I also want to discuss practical ways to get my ailing faith back on track. All these things are intertwined.

I have to refocus my home life and remember what it is that I like about it. The last week away has been good. Being on holiday with nothing that has to be done gave me the opportunity to play with my daughters without feeling guilty about all the other ways I could be spending my time. This weekend back at home has been interesting, as I've tried to continue that without feeling the pressure from various chores around the house. I'm doing all right so far.

Finally on the personal time, some stuff just has to move. I talked about whether or not I'd fork AROS after WebKit is done, or just leave entirely. I think the answer is pretty clear. Trying to lead a major project at this point in time would be a very bad thing for my own sanity. I've learnt enough about operating systems while working on AROS to make it not particularly interesting anymore. I will get Traveller out because I've made a commitment and such things are important to me, but after that my time with AROS will be over. I'll have more to say about plans for Traveller this week, and probably more to say about leaving AROS as the time approaches.

I have a vision for what I want my life to look like. I want to enjoy every part of it, and not feel like some parts are more important than others, but simply be happy with and accept the balance I've made because I know I've done it right. I don't want to be wishing that everything and everyone would just disappear so that I could sleep and not have to think about them. This is my life. I made the choices that got me here. I'm sure I made some mistakes, but the big things, the things that form my identity - husband, father, programmer, child of God - are there because I deliberately chose them. I want to want to be those things.

I don't know if I'll write any more here about this as I progress. I thought I should write something for anyone wondering about my standoffish attitude and general flakiness in the last couple of months. Don't feel like you have to tread lightly around this, I'm still the same person. If you don't care or aren't comfortable with it, then say nothing. If you're interested or curious about everything, ask away.

monday, 17 march 2008

posted at 19:20

I was working on the loader and thinking about blogging about it, and realised I needed some diagrams to adequately describe what I was doing. I've used various diagram software in the past, and it all sucks. I'm good at doing ASCII diagrams, but they look crap. So I spent a few days working on a diagram parser with a view to making it drive a cairo-based renderer of some sort.

That was a fun and interesting project for a while, until it started to need some actual theory behind it. Daniel and I discussed it a bit and came up with some interesting ideas about how it might be made to work, but it suddenly became an effort, so I decided I couldn't be bothered with it for the moment.

This morning I decided to have a bit more of a go at the loader, but found I really couldn't get into it. It was much more fulfilling to continue messing with Morrowind, a game that I've been trying out this week (mostly because my laptop doesn't have the grunt to run Oblivion).

With a holiday coming up next week over Easter, I think I know what's happened. My brain is tired, and it sees a break coming, and has shut down. So I'm not going to push it. I'm going to spend this week playing my game, next week holidaying without a laptop at all (though I may take the DS), and generally taking a break.

If I do it right, I should come back refreshed and ready to work on actual things again :)

sunday, 9 march 2008

posted at 09:24

I started writing a long post about what I'm working on right now, but it's really quite disjointed because I realised I don't actually have a point to make. So here's the short version of what I've been doing this week.

To be really useful, WebKit needs to be a shared library. On AROS, we can't support this in the normal way because of issues with global variables. The solution involves large-scale changes to the program loader and execution code.

AROS executables are actually ELF relocatable objects rather than executable objects. This is done so we can relocate programs on the fly without needing a MMU. To implement ELF shared libraries properly though, we need the extra information provided by ELF executables as they contain (among other things) the dependency list.

What I'm doing is making AROS executables ELF shared objects, containing both the relocation information and the dependency list, as well as other stuff. This requires a new loader for this object type, but I'm taking the opportunity to merge the existing ELF loaders since there's a lot of overlap in functionality.
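
The difference between all these flavours is just the e_type field at the top of the ELF header, which makes it easy to check what you're looking at. A quick sketch, assuming a little-endian target like i386:

#!/usr/bin/env perl

use strict;
use warnings;

# report the ELF object type by reading e_type from the file header
my %type = (
    1 => 'relocatable object (ET_REL)',
    2 => 'executable (ET_EXEC)',
    3 => 'shared object (ET_DYN)',
);

my $file = shift or die "usage: elftype file\n";
open my $fh, '<:raw', $file or die "can't open $file: $!\n";
read $fh, my $hdr, 18 or die "short read\n";

die "not an ELF file\n" unless substr($hdr, 0, 4) eq "\x7fELF";

# e_type is the 16-bit field immediately after the 16-byte e_ident
my $e_type = unpack 'v', substr($hdr, 16, 2);
print $type{$e_type} || "something else ($e_type)", "\n";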

Once shared object "executables" are available, I can begin implementing the library side of things. These are essentially the same thing, except that they will be position-independent, and so the loader will have to deal with setting up the GOT and PLT. The tricky bit is arranging for each instance of the library to find its GOT. I'm still wrapping my head around that.

Once that's done, we'll be able to use Unix-style .so libraries in addition to our standard ones. Not long after, we'll have a properly-sharable WebKit.zcc, with pretty things like libcairo.so and so forth.

I'll post more as I have time, progress and proper brain to describe it.

sunday, 2 march 2008

posted at 12:21

I have a little treat for the adventurous today.

AROSWebKit-preview-r30597-20080302.zip [8.1M]

It's mostly unusable, but many many people have requested a demo. It's still quite difficult to build from the source, so here it is.

This will crash your system. No support of any kind is offered, but feedback is welcome. Send some mail or nab me in IRC :)

monday, 25 february 2008

posted at 15:34
A week later:

The major new things compared to my last post are the addition of the page title (and progress bar), the URL entry bar, and scrollbars. The last one is the thing that's been killing me for the last week, and I'm happy to finally have it over and done with.

What you don't see is most of the detail of integrating WebKit with Zune so that it can request and control native UI widgets. At its core, WebKit is a layout engine. It takes a UI description (in the form of HTML, CSS, etc), creates a bunch of objects, positions them relative to each other and then draws them. Sometimes (eg for a HTML form) rather than handling an object internally, it asks the host UI system to handle creating and drawing the object instead. When it does this, however, it expects to have full control over where the object is placed.

Zune allows a custom class to provide a function that will be used to position the widgets rather than using its own internal algorithms. This I have written. All it does is loop over the list of native widgets, ask WebKit what their dimensions are, and then tell Zune how it should draw them. It's the easy bit in all of this.

A typical Zune object is rendered in three parts. Upon receiving a request to render an object, Zune first asks the object about its dimensions, and receives back the minimum and maximum possible sizes it can use, and its ideal size. The object's parent object sets an appropriate size within the ranges and positions it in relation to itself, and then asks the object to do the same for its children, if it has any (most simple widgets do not). Finally, once the object knows its position and everything else is done, it is asked to draw itself in that space. This rendering process happens based on some external trigger, such as the window being opened or resized.

The complication arises from the order that things are done in this process, and when the process is triggered. Once its size is determined, a Zune object is asked to layout its children, if it has any, via MUIM_Layout. Once done, MUIM_Show is called to tell the object it is about to be displayed. Finally MUIM_Draw is called and the object is drawn.

Let's think about what really needs to happen to render a page, and how Zune conspires against us. I'll start by describing the obvious implementation of this mess, which is what I had before this week. In the beginning, we have a pristine WebKit_View object, with no drawing in it and no child widgets. Let's assume, though, that WebKit has already loaded a page internally, because the initial page load has a couple of extra twists and this description is already complicated enough.

At the moment the application window appears (or the view is added to the UI in some other way), the magic begins. The view is asked for its dimensions, which are typically "as much as you can give me". Next, the view is asked to lay itself out via MUIM_Layout. This is actually a private method, and not one we're supposed to override, so we let that go through to the view's superclass, Group. It gets its list of sub-widgets, finds it empty, and so does nothing.

Next, MUIM_Show is called on the view. This is the first time the view knows the exact dimensions it has been given by the window, so we tell WebKit the new dimensions and ask it to layout the page based on this size. Once that's done, the window calls MUIM_Draw, which sets up a cairo context over the view area of the window and tells WebKit to draw into it.

The cake is a lie.

If WebKit, during its layout phase, determines that it needs native UI widgets (form elements, scrollbars, etc), it asks Zune to create them and add them to the view. Unfortunately, at this point the Zune object layout has already been done (we're in MUIM_Show, which runs after MUIM_Layout), so the new widgets have not been asked about their size, have not been placed on the page, etc. MUIM_Draw fires, the view asks WebKit to draw the page and then calls the supermethod to draw the widgets. These uninitialised widgets all get drawn with no dimensions at the top-left of the view. This is not what's wanted.

At this point some way of forcing the entire layout process to run again is necessary. This is harder than it should be. You can't just call MUIM_Layout, even if it weren't a private method, because the new widgets have not yet been queried for their sizings. There appears to be no standard way of forcing the layout process to run. In the end I've abused a feature of the Group class to do what I want. The usual way you'd add widgets to a group is to call MUIM_Group_InitChange on the group, followed by one or more calls to OM_ADDMEMBER or OM_REMMEMBER. Once done, a call to MUIM_Group_ExitChange "commits" the changes by making the whole window relayout and redraw from scratch. To force the layout to happen, I simply call InitChange followed by ExitChange with no widgets added in between.

(Coincidentally, I used to use these methods when adding the widgets to the group in the first place, but stopped because it was causing a redraw every time. Now I simply use OM_ADDMEMBER and OM_REMMEMBER and assume that the layout and draw will be done elsewhere, which is correct conceptually).

The one chink in this method is that ExitChange eventually causes all three stages of the render process to run - sizing, layout and draw. We're already inside the layout section, and so we don't want everything to run again. Specifically, we don't want this secondary render process to cause WebKit to do another layout, and we don't want it to draw either, as that will be handled by the original render process. Some flags in the view object to record and detect this reentrancy are all that's required. So the final process becomes:

  • Render process triggered
  • (internal) setup widget dimensions
  • (MUIM_Layout) widget layout (ignored)
  • (MUIM_Show) WebKit layout
  • (MUIM_Show) force second render process
    • (internal) setup widget dimensions
    • (MUIM_Layout) widget layout
    • (MUIM_Show) WebKit layout (ignored)
    • (MUIM_Show) force second render process (ignored)
    • (MUIM_Draw) draw everything (ignored)
  • (MUIM_Draw) draw everything

Do you see what we did there? We just bent the Zune render process to our will by turning it inside out :) There's a couple of other warts thrown into the mix to deal with some edge cases, but that's basically it. You can read the dirty details in webkit_view.cpp.

Now I have no idea if this is compatible with real MUI. MUIM_Layout is actually private in MUI, but public in Zune, so I wouldn't be able to override it there, but the override could probably be done well enough in the custom layout function. I'm not overly concerned if it's not compatible; I'm not developing for MUI after all, but I am curious.

This all points to what I believe is a fairly major design flaw in MUI: the stages of the render process are tightly coupled. There should be a direct way to force a single object to relayout itself from scratch without triggering a redraw. There should be a way to get dimensions recalculated. I suppose it's not unreasonable that these things can't be done directly, as it's probably not often that an attempt is made to bolt an entirely separate layout engine onto it. I suppose it is a testament to MUI's flexibility that I can twist it like this at all.

Next up is to get the scrollbars hooked up to the page. After that is the RenderTheme implementation which gives all the other widgets necessary to view pages with forms. A little input handling after that and then we'll have something usable on our hands!

monday, 18 february 2008

posted at 11:03

A couple of hours' work on yesterday's effort, and we see this:

Had I known just how close I was, I probably wouldn't have even bothered posting yesterday.

The wonky text was because of a stupid assumption I made in cairo's font code, which I've now fixed. The text still looks crap, mostly because of issues with the renderer, but I've been pointed at TTEngine this morning, which looks much more like what I want and would let me remove some of the hacks I've had to do in cairo. I'll be looking at this further this week.

There's still a hell of a lot to do, so don't get too excited. At least now I have a way to see whether my changes are actually doing something.

I'll be posting many more screenshots as work progresses, but I won't be blogging them all. Things are moving just too fast for that. If you want to follow the screenshots, watch my screenshots set on Flickr or just subscribe to its feed.

sunday, 17 february 2008

posted at 09:46
Current progress:

This is WebKit on AROS rendering a trivial page containing an H1, an H2, a DIV with CSS styles set to force it to 100x100 with a green background, and an IMG of a pirate, though that's not working yet.

The text alignment appears to be screwy because my code in cairo is not correctly calculating the baseline on tall glyphs. It works as expected in my cairo tests though, so I'll need to dig a lot deeper to figure this out. Likely I just missed some mundane detail; font metrics are actually quite difficult, and I'm not helped by the fact that the bullet interface doesn't provide a way to get the metrics for the font as a whole, meaning I have to generate them in a rather horrible way.

There's also an issue where if a line of text is wrapped (eg if I resize the window to be really narrow), only the last line is rendered. I haven't looked into that yet. Oh, and of course there's a bunch of internal stuff that really isn't correct but won't noticeably affect the outcome just yet.

All in all, not bad progress so far. It's only going to get more difficult as I really get into the details, I think. Not to mention the many shortcomings in services provided by AROS, which are going to need to be addressed if this thing is to look nice and not be insanely slow. I'll write more about that lot later.

monday, 11 february 2008

posted at 09:07

AROS work has slowed down over the last week. There's been a lot of email to reply to (I won't mention the topic :P), and I've been doing some web work on the side, but I've still had a little time to work on cairo, which I'm now calling finished, at least for the moment.

The big thing I was trying to get going was the shared library stuff; ie having a shared cairo.library. I got this working, but programs are crashing because cairo has globals, a fact that I'd overlooked. It only has a couple, but they're rather important. Once again, I'm not willing to make the large changes required to remove the globals because I want to keep the changes to upstream to an absolute minimum. Once AROS has proper support for global data, then this code can be resurrected.

So cairo works, but is noticeably slow. That mostly comes from it doing all its rendering in software and then blitting the results into the AROS graphics subsystem. Working slowly is good enough for me at this stage.

Fonts work, with the following issues:

  • Non-scale transformations (rotate, shear, etc) don't work, as the necessary API is not exposed via bullet. Scaling works, but only vertically - again, missing API. Basically the only cairo API that is of any use for glyph transformation is cairo_set_font_size().
  • The font tests don't pass. The first reason for this is that font sizing on AROS is not the same as on other systems. As far as I can tell the bullet interface to FreeType is recalculating the metrics to better match the traditional Amiga way of talking about metrics, with the downside that it makes the glyphs smaller than they should be. Additionally, there's no way to get the display device DPI under AROS, making it quite impossible to have FreeType adjust the scale appropriately.
  • The other reason the tests don't pass is that the spacing between glyphs is wrong. A typical line of text rendered on Linux will have pretty much even spacing between each glyph. The same text rendered on AROS has uneven widths. I haven't been able to determine the cause of this yet.

The font problems shouldn't be an issue for WebKit, as it does its own font work, though it will still hit the underlying font system, so it's likely the same issues will appear in other contexts. Again, I'll just do the best I can.

So this afternoon its back to WebKit! There's been many many changes there in the last month, so the first step will be to just get my stuff building again.

friday, 1 february 2008

posted at 22:28

As you know, I've been at linux.conf.au this week. There's a lot of cool stuff happening in the Linux world, and a few of those things really made me ache to grab the code and get hacking on them. But even more than the technology, the best thing about Linux is the community. Even when there's disagreement (and there's plenty) the feeling is wonderful because everyone is working hard on the same thing: making computers awesome.

A shortlist of things I'd like to work on:

  • Martin Krafft's netconf network configuration framework. His design is elegant and this is something that Linux badly needs.
  • Rusty Russell's lguest hypervisor, which is just beautiful in its simplicity. I've already done some real hacking on this in the tutorial and it's very pleasant to work on. I had a chat with Rusty about adding support for gdb stubs (because I like that kind of thing) and it looks like it could be added quite easily. That sort of gratification is hard to come by. Plus I'm feeling happy because I won the prize in the tutorial for the most progress made (four targets reached in two hours). The prize is some kind of Brazilian liquor called Chico Mineiro that I'm looking forward to trying at the next gaming night.
  • cairo is still outstanding and from its requirements have come some major redesigns of the 2D graphics core in X and below. By the time the wizards are done with it cairo (and others) will be able to get better performance out of 2D graphics hardware than any other platform (Windows included). This stuff is harder to get into but is by no means impossible.
  • The GNOME crew have got some fascinating stuff coming down the pipe that I'd really enjoy working on. It's mostly integrating different types of application to better support social interactions (ie conveniently sharing your stuff), which is something I've always had an interest in.
  • I've been gifted an OLPC XO-1. In the immediate future I've decided to let Francesca at it and document her progress, as a kind of observation project. The thing about these machines is that they are purpose-built for sharing and working with others, and the interface breaks all the rules and thus gives heaps of scope for trying new things. Whether she gets sick of it and hands it back or I buy one for myself so that we can play with them together, there's lots I'd like to do with it.

So there, lots of stuff I could do that I'd thoroughly enjoy, that would produce real stuff that would be used on real computers by lots of real people, and that would keep this community buzz alive for me.

On the other side, there's AROS. Now I like AROS because it's technically interesting and there's lots of stuff to fix, but previously I didn't really have anything better to do. I still like AROS, but I've found myself this week doing a lot of soul-searching, trying to decide if AROS hacking is really the best use of my time. As I look at what's happening at LCA this week, it's increasingly apparent that AROS, when held up against just about everything else, is insignificant.

I don't have any delusions about AROS ever becoming a mainstream system, and that's fine, because it doesn't need to be to still be considered successful. In order to be successful, it needs a clear plan and goal moving forward (so we can actually measure our progress), and it needs a strong community of developers around it committed to that goal.

As it stands, we have none of that. The community, such as it is, is fractured, which is unsurprising since it's a part of the Amiga community and we all know just how much infighting there is and always has been there. In terms of goals, there basically are none. There are those that would argue that "AmigaOS 3.1 compatibility" is the goal, which I'd answer by either saying we're already there, since most source from the 3.1 era will compile and work with no or only minor tweaks, or that the goal is irrelevant, since there's nothing from the 3.1 era you'd want anyway.

If we are to be a clone, then we're still a long way away - AROS can't even run on real Amiga computers! We're incompatible in a number of ways, but those ways are only important for binary compatibility, which we don't have. On the other hand, if you have the source, perfect compatibility is not really an issue, as you can modify the application for the differences. But like I said above, there's nothing from the old days that's worth bothering with.

In the absence of real goals, I set my own personal goal for my work on AROS, which is to get it to a point where I could run it as the primary OS I use day-to-day on my laptop. That's a huge task, as my laptop is something close to an extension of my brain. AROS would need to at least be able to do the following to supplant Linux there:

  • Web browser
  • SSH client
  • Fully xterm-compliant console
  • Stable and fast filesystem
  • X server (for remote applications)
  • Perl
  • Proper support for my laptop - wireless, idling, suspend, etc
  • Some way to run certain Windows apps (like VMWare, qemu, etc)

It should be clear that there's more to it than just this list - a massive amount of work needs to happen under the hood to support all this.

As you can see, my aims are very forward-looking, and make no provision for backward compatibility. This is causing some problems as I try to progress things. An example is my recent work on cairo. AROS graphics APIs are broken in the way they handle certain things related to alpha channels. Unfortunately this can't be changed without breaking backward compatibility. As such, I've implemented a particular fix in four different ways over the last two weeks. The first three introduced compatibility issues and I've had to remove them. I'm hopeful that the current one will not introduce any further issues, but I hoped that last time too. Even if it does stick, I still needed a pretty nasty and performance-degrading hack in cairo to finally get what I wanted.

Obviously, this is frustrating. Perhaps it wouldn't be so bad if everyone was at least trying to move forward, just breaking as little as possible in the process (something I agree with), but there is an entire camp that appears to want backward compatibility at the expense of all else.

If I haven't been clear yet, I don't think that this is a bad goal. I have no issue with people wanting things that are different to what I want. The problem that I have in this case is that I don't see that the two positions can ever be reconciled as they're fundamentally opposed.

So I'm frustrated anyway, and then I go to a conference and hear and see amazing things by focused and motivated hackers, and I get even more frustrated because I want what they have. I want to work with these people on code that matters with the confidence that we're all moving in the same direction. This is why I'm starting to wonder if AROS is such a great place for me to be.

I've had some discussions in #aros about this, and the idea of forking the project often comes up. I've considered this in the past, but I've so far resisted for a few reasons. From the practical side it's a pain because I'd have to set up the website, repository, etc and do admin stuff, write a lot of email, write a plan and other project governance stuff. Socially, it always sucks to split a community. I'm starting to think that if I want AROS to move forward, I may not have much option.

The important thing that would have to happen before a fork is to very clearly determine what I want not-AROS to be. I think "modern" and "Amiga-like", or perhaps "Amiga-inspired" are probably the simplest ways to describe where I think things should go, but we have to specifically define those terms. "Modern" is pretty straightforward: the goal should be that if I put not-AROS on my computer, it will make efficient and effective use of my multiple processors, my piles of RAM, my wireless controller, my USB devices, etc. I should be able to use my computer to do any task that I can do currently in Linux or Windows. That of course requires a lot of applications to be written, but there should be nothing inherent in the system that prohibits those applications being made.

"Amiga-inspired" is a little more difficult to define. I've asked a few of the AROS crowd, and nobody seems to really be able to quantify it, which I find surprising since they're usually the advocates for it and came from Amiga in the old days. Perhaps its one of those cases where its difficult to define what you know because its been obvious for so long.

I don't have an Amiga heritage, coming from Unix as I have, so perhaps I can do better. Since I have no issue with changing the internals, we should start by looking at the Amiga from a user perspective. The major thing is that the system is designed to be GUI-driven, and as such the primary interface is a light and functional GUI. Unix of course is the other way around, where the command line reigns supreme.

The next major thing is the fast booting time. An Amiga system was typically ready for use within seconds of starting. Interestingly, if you measure the boot time as being from the time when the bootloader first hands control to the system to the time when the primary interface can be used, Linux actually only takes a few seconds too. The standard Unix boot sequence generally readies all system services before giving control to the user, whereas the Amiga was more likely to load things it needed on demand. This made sense given the small amounts of memory available to the system, but that does not mean that it's not a good model even for a modern system (though more options exist given the available resources, like starting services in anticipation of their use).

Much of this is enabled by the extremely light microkernel architecture. There's so little structure that system processes actually run much closer to the metal than they would on other systems. I'm not sure how sustainable this would be as more features and system services are added, but neither have I had much chance to think about it in detail. I see no particular reason why it couldn't be kept light if it was always being considered at every stage of development.

So to summarise, not-AROS would:

  • Boot fast
  • Assume a GUI (but see below)
  • Not keep stuff around that isn't needed
  • Keep the microkernel vibe
  • Let you do what you want without getting in your way

A word about the GUI. I'm a command line junkie. I type fast but am really uncoordinated when it comes to using the mouse. So my personal requirement (and I get to have them if it's my project) is that everything you can do in the GUI you can do via the command line, and vice-versa. That requirement is fairly straightforward to achieve by separating function from form - guts in a library, with a UI of any type that calls it. Remotely controlling GUI applications is also something that Amiga has a history of, with AREXX ports and the like.

And so then we get to backward compatibility. The fact is, I don't care. My not-AROS would not be an Amiga clone. It would try to follow roughly those points above but would be happy to break the rules when they don't work. It would aggressively steal from other systems past and current, both in ideas and in code. Additionally, once implemented, I would not be afraid to gut the internals and redo it if it became clear that we did it wrong the first time.

So there's the high-level goals. They're deliberately nonspecific, which is what you want at that level. For the actual development cycle, I'd probably aim for regular releases (depending on available developers), each focusing on one or two specific areas. There'd be no nightly builds. You either get the source and build it yourself, or you wait for a release. I have ideas already about what I'd work on and change and in what order, but I'm not going to write about that here because the tasks are actually somewhat irrelevant.

From where I sit right now, AROS is in an untenable position. In my opinion it cannot get to where I think it could by continuing to be managed the way it is.

What will I do? For now, I'm committed to (and still enjoying) my work on WebKit and cairo. I will complete the Traveller bounty. At that time, I'll consider my options, which are three:

  • Abandon AROS development altogether and go and work on Linux stuff, and enjoy myself, but always wonder what might have been.
  • Continue work on AROS and likely continue beating my head against the wall until I finally explode.
  • Fork AROS and see what happens, with the high likelihood that it will go nowhere and waste a lot of my time, and the guarantee that a good amount of my time will be spent managing the project rather than writing code.

What would be great would be if the AROS crowd managed to make a hard decision one way or the other before I have to decide properly. It won't happen, but it still would be very nice.

So that's it. That's about the sum total of my thinking this week. If you're going to add a comment, please make a good argument for or against what I've said. This is actually a serious post, and I'm not interested in hearing from the fanboys this time around. If you post "I agree!" or "don't ruin AROS for everyone!", expect to have your comment deleted. And if you are going to disagree, make sure you have a pretty solid argument to back up your position, because you'll be wasting your time if you don't - I've agonised over this stuff this week and I'm quite sure of my own position.

wednesday, 30 january 2008

posted at 17:24

I just got given an OLPC XO-1 on the proviso that I use it for "something cool", otherwise hand it on to someone who will do something cool with it.

I'll take the night to consider what I might like to do with it and decide whether to keep it or pass it on tomorrow. AROS would be cool, of course, but I don't think that's quite what they have in mind. I'll talk a bit to some folks at the conference dinner tonight; maybe they'll know what to do.

tuesday, 29 january 2008

posted at 10:12

I'm at linux.conf.au this week and because I'm so well practiced at listening to people talk while doing something unrelated on the laptop (thanks dayjob), I've got a hell of a lot of code done, making up for the nothing I did over the weekend.

Yesterday I finally got text rendering happening via cairo:

There's not really a lot to say about it. The hardest part has been converting the glyph metrics that come back from the bullet glyph engine into cairo's glyph metrics, as they have a slightly different view of the world.

The code is still rather messy and incomplete. I still have to handle the font matrix, which will allow arbitrary scalings, rotations, etc. Smarter font selection is needed, as well as using the algorithmic emboldening/shearing stuff to provide fonts that don't exist. At least it's all downhill from here.

tuesday, 22 january 2008

posted at 13:29

Things got a little slow in the last week. I spent last week tweaking bits of graphics.library and graphics.hidd to force the alpha channel to be set when calling ReadPixelArray() on a surface that has no alpha (so it can be fed directly to a cairo surface with alpha). Each attempt worked, but also introduced subtle incompatibilities into the Cybergraphics API. I still think it's important to have (along with software alpha compositing, which is an entirely separate issue), but it can't be done comfortably via the current API, so for now I just post-process the returned pixel data and force the alpha channel on before handing it to cairo. I don't like it, but it will do, and it makes it possible to use any system bitmap as a source. So now you can use cairo to take a snapshot of the entire screen with this simple code:

    /* grab the whole public screen as a cairo surface and save it out */
    struct Screen *screen = LockPubScreen(NULL);
    cairo_surface_t *surface = cairo_aros_surface_create(&screen->RastPort, 0, 0, screen->Width, screen->Height);
    cairo_surface_write_to_png(surface, "snapshot.png");
    cairo_surface_destroy(surface);
    UnlockPubScreen(NULL, screen);
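
The post-processing step mentioned above is conceptually trivial: walk the returned pixel data and force the alpha byte on. A minimal sketch (the function name and the layout assumptions are mine, not the actual AROS code):

    /* force every pixel opaque; assumes 32-bit native-endian ARGB words,
       as returned by ReadPixelArray() and expected by cairo */
    static void force_alpha_opaque(uint32_t *pixels, int width, int height, int stride)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                pixels[y * stride + x] |= 0xff000000;
    }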

I've now turned my attention to the font backend. It's taken me a while to even begin to understand it, because I know basically nothing about font rendering, but I think I'm at least starting to see what's going on. I began an implementation based on the graphics.library functions for font rendering, but it really felt wrong, as the interface really doesn't seem to support much - very few facilities for controlling rendering options, limitation to ASCII, etc. It seemed that there must be something more powerful available, as it's clear from just loading up AROS that we support TrueType fonts and non-ASCII characters.

After a lot of digging, I found out about the existence of the bullet.library interface for outline fonts, and our implementation of it in freetype2.library. From there, to Google, where I discovered that there's next to no documentation out there for it. I did find reference to a document in the AmigaOS 3.1 development kit, and a quick ask-around in #aros gained me a copy of BulletLibrary, which I offer here for reference.

The interface is complicated, but appears to have most of the features I need to map to cairo font functions. I have no idea how it will go, and I imagine our implementation is deficient, but I will write some tests this afternoon and see what I can do with it, then start hooking it up to cairo.

friday, 18 january 2008

posted at 20:06

As far as cairo is concerned, its backend buffer get/set methods are only required to store and retrieve pixel data in the format requested by the cairo core. It does not have to do fancy conversions. It does not have to do alpha stuff. Presumably you'd want it to be convertible to the host graphics system, but cairo itself doesn't care about that.

wednesday, 16 january 2008

posted at 12:21

Cairo is working! So far I have RGB and ARGB surfaces working, and so still have alpha-only surfaces and fonts to do, but that is enough to make the majority of the test suite work. I actually had the basics working on Thursday, but the colours were all messed up, and it took five days to track down all the issues and fix them. I won't go into the process, because it's peppered with dead ends and misunderstandings, but here's what I've learnt:

  • CyberGraphics is a big-endian interface. That is to say, when you request ARGB, you will always get the same byte ordering on little and big-endian machines. This is different to cairo, where specifying ARGB will get you the ordering of the local machine. What this means is that on little-endian machines when converting from AROS bitmaps to cairo surfaces, I have to request BGRA format from ReadPixelArray() but then tell cairo its ARGB, and vice-versa.
  • When AROS converts from a bitmap with no alpha channel (eg RGB) to one with alpha (eg ARGB24), it will set the alpha in the target bitmap to 0 (fully-transparent). When feeding the target into cairo, which knows about alpha, it basically does nothing as it sees that all the pixels are fully transparent. I've already done a rather naive fix in AROS for one case, but there's still a case where the graphics library, realising that a conversion from a non-alpha format to a 32-bit with-alpha format is requested, rewrites the target format to be 32-bit no-alpha (eg 0RGB), thus leaving the alpha set to 0 again. I'm working on a more generic fix.
  • WritePixelArray() has no support for software alpha compositing. That is, when using it to blit a 32-bit with-alpha bitmap to another bitmap without alpha, the alpha component is ignored rather than computed in software. Ironically, alpha compositing code exists for WritePixelArrayAlpha(), so I'll also be looking at factoring this code out into a generic function and having both calls use it.

Once I get this sorted, I have a very cute piece of eyecandy in the works to demonstrate to you all just how powerful cairo is, and just how easy it is to use. Hopefully I'll have something to show in a few days, then I'll get back onto the font support.

monday, 14 january 2008

posted at 12:23

A year ago today I made my first commit to the AROS Subversion repository. It feels like I've been doing this forever, not only a year. I've been digging back through the history to see what I've been up to over the last year.

Here's a list of things I've done that I think are worth noting:

  • 21 January: tap.device network driver for Linux hosted AROS
  • 4 March: DOS packets and initial FAT filesystem support (minimal read-only)
  • 27 April: FAT write support
  • 7 May: Fast bitmap scaling, made Wanderer startup faster and made FFE usable in hosted and boosted it from 8 to 20FPS in native
  • 16 May: FAT notifications
  • 20 May: PA_CALL and PA_FASTCALL taskswitch-free and lock-free message ports for speed
  • 8 June: GetDeviceProc() and ErrorReport() rewrite and internal DOS refactoring
  • 17 June: Pipe() DOS function
  • 21 September: hostlib.resource for calling host libraries from inside AROS
  • 2 October: Converted X11 driver to use hostlib and moved it out of ROM
  • 3 October: SDL driver
  • November (and ongoing): C/POSIX library improvements
  • 17 November: Math library upgrade
  • 3 December: thread.library
  • 13 December: ELF loader support for large objects

There's also a pile of tweaks and fixes that don't feature in this list. According to git, I've made 269 commits to the core AROS repository, adding 23182 lines and removing 12741 lines.

In addition to this, I've got plenty of work-in-progress stuff that hasn't (or won't) hit the AROS repository:

And of course, the Traveller-related work:

2008 should be a bit more focused for me, as most of the first part of the year will be working on getting Traveller out the door, and then on a few big supporting things like SysV shared object support. I don't think it'll be any less interesting as a result :)

Thanks to everyone who has helped and guided me through the many many mistakes I've made, particularly the freaks in #aros. The major reason I'm still here and wanting to work is that it's fun, nothing more. Cheers lads :)

wednesday, 9 january 2008

posted at 09:17

With the help of WebKit developers I finally sorted out the crasher that plagued me over Christmas, and now I see WebKit making network requests, receiving data and calling into the graphics code to get it on screen. The next step is to begin implementing this graphics code.

As far as I can tell I need support for blitting objects (like images) to the screen, but also drawing primitives, from simple stuff like lines, circles and rectangles up to complicated things like Bézier curves and arbitrary paths. It needs to be able to apply a transformation matrix to both paths and images. It needs compositing support. It also needs to be able to operate on screens of arbitrary size and depth.

AROS (and the standard Amiga API) can't support this. Some of it exists, just not enough. graphics.library has basic drawing primitives but not advanced stuff like splines and such. Its primitives don't operate reliably on TrueColor screens, which is what pretty much everything is these days. CyberGraphics provides access to higher-depth modes, but only really for blitting. And we have no support for affine transforms, compositing, or other advanced features.

To Commodore's credit, it's pretty clear that they were moving in this direction. They had these concepts on the market at a time when they were barely even considered elsewhere. I'm quite sure that were they still around today we'd have these features available. Sadly, we don't, so we must find another way.

I've studied the problem in some depth, and I've decided to port the cairo graphics library to AROS. Their description sums it up well enough:

The cairo API provides operations similar to the drawing operators of PostScript and PDF. Operations in cairo include stroking and filling cubic Bézier splines, transforming and compositing translucent images, and antialiased text rendering. All drawing operations can be transformed by any affine transformation (scale, rotation, shear, etc.)
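
To give a flavour of the API, here's a minimal sketch using standard cairo calls; in my port, the AROS surface constructor would slot in where the image surface is created:

    /* stroke a cubic Bézier onto an image surface and save it as a PNG */
    cairo_surface_t *s = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 200, 200);
    cairo_t *cr = cairo_create(s);
    cairo_move_to(cr, 10, 190);
    cairo_curve_to(cr, 40, 10, 160, 10, 190, 190);
    cairo_set_line_width(cr, 4);
    cairo_stroke(cr);
    cairo_surface_write_to_png(s, "curve.png");
    cairo_destroy(cr);
    cairo_surface_destroy(s);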

A port will be a good thing for everyone. WebKit already has a cairo graphics target, so I'd get my rendering for free. The library is extremely portable, with a great target abstraction. Indeed, I already have the thing compiling and the AROS backend stubbed.

More controversially, I think cairo could actually become the core 2D graphics system for AROS. graphics.library could be trivially implemented on top of it for compatibility, so there's nothing to worry about there. We'd implement a cairo backend that talks to a stripped-down version of our graphics HIDD layer (as much of its functionality would no longer be necessary). Once in place it would give easy support for eyecandy like real transparent windows or something like Exposé. Combine that with the plan to get 3D via Gallium, and AROS could become the shiniest thing out there.

My port will be a proper AROS-style shared library, cairo.library. Cairo's code is clean enough that I think I can do this without requiring the API to change and while still making it possible to contribute all the changes upstream without adversely affecting them.

Port repositories: cairo and pixman. These will be combined in the final library.

monday, 7 january 2008

posted at 15:41

Christmas and New Year festivities are over, and I enjoyed them thoroughly. I spent some awesome time with both sides of my family, played some cricket and soccer, played some Wii, ate way too much several times, and scored a nice pile of DVDs and t-shirts. In the long drives between various parties and dinners I've had a lot of time to ponder a WebKit problem, which I document here :)

WebCore has some functions that arrange for a simple timer to be implemented. It's very basic; there are three functions: one to set a function to call when the timer goes off, one to set an absolute time that the timer should go off, and one to disable the currently set timer. This simple interface is used by the higher-level Timer class, which can be instantiated multiple times. It handles coordinating the current timers and making sure the system timer is requested at the proper interval.
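
From memory, the interface in WebCore's SharedTimer.h looks roughly like this (check the real header for the exact signatures):

    namespace WebCore {
        // the function to call when the timer fires
        void setSharedTimerFiredFunction(void (*f)());
        // the absolute time at which it should fire
        void setSharedTimerFireTime(double fireTime);
        // disable the currently set timer
        void stopSharedTimer();
    }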

I did a first implementation of this using timer.device directly, but it really didn't feel right. The interface has no provisions for initialising or finalising the timer, so I hacked it such that the first call would open the timer device if it wasn't already open. I ignored the finalisation for the time being, and started looking at how to arrange triggering the timer.

We're back to the old problem that AROS basically does not have any provisions for signals/interrupts that preempt the running process in the process context (actually, task exceptions can, but they're too low-level for our purposes and don't work properly under AROS anyway). When timer.device fires, it pushes a message onto the IO request port, which either raises a signal (MP_SIGNAL port) or calls a function directly from the scheduler context (MP_SOFTINT port). There's also MP_CALL and MP_FASTCALL ports; these are the same as MP_SOFTINT for our purposes.

Having a soft interrupt that calls the timer callback doesn't work, as it would cause us to do large amounts of work inside the scheduler which is bad for system performance. Having a signal requires the main process to Wait() for that signal and then call the timer callback. The main loop is controlled by the application and by Zune, both things we have no control over.

I confirmed via #webkit that the timer callback is indeed supposed to be called from the UI main loop. Studying the MUI docs and the Zune code, it seems that it is possible to have the Zune main loop set up a timer and trigger the callback itself using MUIM_Application_AddInputHandler. This is perfect for our needs, as it removes any need for initialisation and finalisation in the shared timer code itself.

The only thing that has to be arranged then is for the shared code to get hold of the application object to set up the timer. The application object is created and controlled by the application, of course, but there is only ever supposed to be one of them per application, and I can't think of a good reason why there should ever be more than one. It's easy to get hold of this object from any Zune object inside the application, via the _app() macro, with the slight quirk that it's only available when the object is actually attached to the application object. We can detect that well enough though and defer calls into WebKit until we're attached, so all that remains is to grab the application object, stow a pointer to it in a global variable, and then have the shared timer code use that variable.
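
Sketched out, the whole arrangement looks something like this. MUIM_Application_AddInputHandler, MUIIHNF_TIMER and the MUI_InputHandlerNode fields are the real MUI interfaces; the global, the method ID and the function name are illustrative:

    /* stashed via _app() once the view is attached to the application */
    static Object *webkit_app;

    /* called by the shared timer code to arm the Zune timer */
    void arm_shared_timer(UWORD millis)
    {
        static struct MUI_InputHandlerNode ihn;

        ihn.ihn_Object = webkit_app;
        ihn.ihn_Millis = millis;
        ihn.ihn_Flags  = MUIIHNF_TIMER;
        ihn.ihn_Method = MUIM_WebKitView_TimerFired;  /* our method id */

        DoMethod(webkit_app, MUIM_Application_AddInputHandler, (IPTR) &ihn);
    }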

This all took me a few hours to work out, and then I happily went off to do Christmas things. Over the next couple of days, the nagging seed of doubt that I had in the beginning grew into some kind of spooky piranha flower thing. This morning, while hanging clothes out to dry, I finally understood the issue. It's all to do with how global variables work, and it has much greater implications for this project than just getting hold of the Zune application object.

Let's think about what happens when you load a program into memory. Forgetting about the details of the loader doing relocations, setting up space for variables, etc and the program startup making shared libraries available, effectively you just have the system allocating a chunk of memory, loading the program from disk into that memory, and then running the code within it. Space for, and initial values of, global variables are all held within that chunk of memory, and only the program code knows where they are and what they're for. Nothing else on the system can reasonably access them, so there's nothing to worry about.

A shared library is essentially the same as this, except that it is only ever loaded into memory once. When a second program requests it, the system checks if the library is already in memory, and if it is, arranges for the program to use it. This is where things can get complicated. The big chunk of memory contains some things that are sharable because they can be considered read-only - things like program code, const data, and so on. Regular global variables are generally not sharable, as you generally don't want changes made by one process to be seen by another.

In systems that have an MMU, the usual way this is dealt with is to make a copy of the global data somewhere else in memory, and then map it into the process address space at the appropriate location. That is, processes share the read-only parts of the shared library, but have their own copies of the writable areas. (In practice it's quite a bit more complicated, but this is the general idea.)

AROS, like AmigaOS before it, has all processes, libraries and anything else coexisting in the same memory space. Shared libraries pretty much don't use global data. There is no support for MMUs, so the kind of copying and remapping described above is impossible. If per-process data is required, then various techniques are employed explicitly by the shared library author - per-opener library bases, data access arbitration using semaphores, and so on. That works fine, because the author is fully aware of these limitations when he designs and implements the library.

It's worth noting that this problem is not unique to AROS; it applies to every system where an MMU is not available. uClinux has had the same issue in the past and dealt with it in a couple of different ways.

Now let's look at what I'm trying to do. My goal is and has always been to make WebKit a shared library (actually a Zune custom class, though as far as the OS is concerned it's the same thing). WebKit and its dependencies all make use of global variables as necessary, and assume that their globals are isolated to a single process, which is a reasonable assumption given that basically every system out there that WebKit currently runs on works this way. For AROS, though, this is a huge problem.

The cheap way out is to just ignore the whole mess by producing a static libWebKit.a and requiring any applications to link it statically. This is essentially what I'm doing now. It works well enough, but currently the (non-debug) library weighs in at a touch under 18MB, and that's with barely any AROS-specifics implemented. For every WebKit-using application you have running, that's at least 18MB of duplicated code that you have to hold in memory. There's also all the usual issues with static linkage: greater disk usage, no ability to upgrade just the library and have all its users get the update, and so on.

The least favourable option would be to rewrite all the parts of WebKit and its dependencies that use global variables and either find a way to remove them or otherwise move them into a per-process context. This is horrendously difficult to do and would pretty much remove any hope of contributing the code back to its upstream sources, which I consider an imperative for this project. So let's say no more about it.

The only other option is to add support to the OS to do the appropriate remapping stuff. This is no small undertaking either, but I think as time goes on, it's a very good thing for us to have. I haven't investigated it in depth, but in addition to actually implementing the stuff in the loader, it's also necessary to make some changes to the way modules are held in memory and shared between users.

Currently a module can exist in memory and be used as-is by multiple users without too much effort. Because there's no global data, sharing a module is as simple as incrementing a use count, so that the module isn't purged from memory ahead of time.

When sharing an object with global data, in the absence of an MMU, it's necessary to allocate new global data for each opener and do its relocations each time. This requires keeping a record of the required relocations. There's also the issue of constructing the global offset table and the procedure linkage tables, and making sure the pointer to the GOT is carried around the application appropriately. Work that will be useful here is Staf Verhaegen's current project on library bases and preserving the %ebx register. Of course this will all have to integrate nicely with that.

Then there's also the matter of detecting when to use all this new stuff over the standard loading and linking code. I think I can make that as simple as requiring all code to be shared in this way to be position-independent (ie compiled with -fPIC). Code compiled in this way is incompatible with the standard load method anyway, and for this type of shared object it's far simpler to implement this whole mess if PIC is enabled. If it is, then detecting which type to use should be as simple as looking for the presence of the .got section in the object.
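
That detection might look something like this - a sketch, assuming the section headers and the section name string table have already been read in:

    #include <elf.h>
    #include <string.h>

    /* does this object carry a global offset table, ie was it built with -fPIC? */
    static int has_got_section(const Elf32_Shdr *sh, int shnum, const char *shstrtab)
    {
        for (int i = 0; i < shnum; i++)
            if (strcmp(shstrtab + sh[i].sh_name, ".got") == 0)
                return 1;
        return 0;
    }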

That's about as far as my thinking on the matter has come. The shared timer stuff that originally provoked all this is working happily, but if WebKit is ever to be a shared object on AROS, all this will need to be revisited. Because it's such a huge undertaking I'm going to leave it until after WebKit and Traveller are in some kind of usable state. At that time I'll look at handing off care of the web browser to someone else for a little while and work on this stuff instead.

tuesday, 1 january 2008

posted at 22:53

Hi. I have lots to tell you, but haven't had time to write it all down yet. But I wanted to share this, the very first web request ever done by WebKit on AROS:

GET / HTTP/1.1
Host: 192.168.0.1:8080
Accept-Encoding: deflate, gzip
User-Agent: WebKit AROS
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

I'll post more details sometime in the next couple of days. Happy new year :)

wednesday, 19 december 2007

posted at 22:06

This week is insanely busy, as is typical of the week before Christmas, so I've had very little time to think about code in the last couple of days. I therefore opted for something that wouldn't steal too much of my brain, and began stubbing the Zune View class.

The interface will be typical Zune stuff. To get a web renderer into your app, you'll include a WebKitViewObject in your widget tree, and go from there.

The launcher is just a fairly standard Zune application setup. It will get a little more code before the end, mostly adding basic navigation buttons and a location bar, but the basic structure won't change. This will serve as both a test program and an example of how to use WebKit in your own applications.
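
Roughly, hooking the view into an application might look like this. WebKitViewObject is the real macro mentioned above; everything around it is the usual Zune boilerplate, and the attribute values are just placeholders:

    Object *app, *win, *view;

    app = ApplicationObject,
        MUIA_Application_Title, "ArosLauncher",
        SubWindow, win = WindowObject,
            MUIA_Window_Title, "WebKit",
            WindowContents, VGroup,
                Child, view = WebKitViewObject,
                End,                    /* view */
            End,                        /* VGroup */
        End,                            /* WindowObject */
    End;                                /* ApplicationObject */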

tuesday, 18 december 2007

posted at 11:17

Now that I've (apparently) fixed the loader, my mammoth WebKit test binary loads and runs, and so I've begun implementing the stub functions in earnest. To start, my method has been to run the program until it crashes, find out where the crash happened, which is usually a NULL pointer dereference, and then provide a basic implementation of the class that that thing is supposed to be pointing to.

The current problem is a crash that occurs inside a regular method call, for no apparent reason. The offending method, in its entirety:

void DocumentLoader::setFrame(Frame* frame)
{
    if (m_frame == frame)
        return;
    ASSERT(frame && !m_frame);
    m_frame = frame;
    attachToFrame();
}

Good old printf() tracing shows that the crash occurs after m_frame = frame but before attachToFrame(). That is, that method is never called. This is highly unusual, and tedious to debug, because it means we have no choice but to drop down to assembly code, which I can muddle through well enough but can't really wrap my brain around.

Disassembling the last two lines of the method, we get this:

    mov    0x8(%ebp),%edx
    mov    0xc(%ebp),%eax
    mov    %eax,0xc(%edx)

    mov    0x8(%ebp),%eax
    mov    (%eax),%eax
    add    $0x8,%eax
    mov    (%eax),%eax
    sub    $0xc,%esp
    pushl  0x8(%ebp)
    call   *%eax
    add    $0x10,%esp

The pointer to the current object, this, is on the stack, 8 bytes in, as is the frame argument, 12 bytes in. So we see the value of this being fetched from the stack and stored in %edx, and then the same for frame, which is stored in %eax. Then the location 12 bytes into the object proper is computed (which is where m_frame is stored), and %eax (the location of the frame object) is stored into it. Thus, m_frame = frame.

The next chunk, predictably, is the call to attachToFrame(). The important thing about this method is that it's what C++ calls a virtual method. It wasn't until Friday that it was actually explained to me what that meant, and I found it hilarious. Consider:

    Object *o = new Object;
    o->method();

    o = new SubObject;
    o->method();

(where SubObject is a subclass of Object).

Now, if method() is a virtual function, this will do what you'd expect from most other OO languages: the first call will call Object::method(), the second SubObject::method(). If it's not virtual, then both calls will go to Object::method(), because the method is taken from the type of the pointer, not the type of the object itself.

I don't know if this was considered counterintuitive when it was first designed, but it's certainly not the way most OO languages work these days. Usually you have to be explicit when you want to call a superclass version.

In any case, the code generated is different. In the simple non-virtual case, the call can be done via an absolute address, as the compiler can know exactly where the method() function is for the type. The virtual case is more complicated as the object itself needs to be interrogated to find out where its function is.

To do this, a table for each class that the object inherits from is placed inside the object, containing pointers to the functions that the object wants to use for its virtual methods. A virtual method call might then be rendered in C as:

    o->_vtbl_Object.method();

That is, go through the table of implementations of methods defined in the Object class to find the method, and call it.
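
In other words, roughly this (an illustrative sketch, not what any particular compiler actually emits):

    struct Object;

    /* compiler-generated table of function pointers, one per class */
    struct Object_vtbl {
        void (*method)(struct Object *self);
    };

    /* every instance carries a hidden pointer to its class's table */
    struct Object {
        const struct Object_vtbl *vtbl;
        /* ... data members ... */
    };

    /* so o->method(), when method() is virtual, becomes something like: */
    o->vtbl->method(o);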

So, getting back to our disassembly. attachToFrame() is a virtual method. The code gets this from the stack, 8 bytes in, and puts it in %eax. Then it dereferences the pointer to find the actual memory location of the object. It then adds 8 to that to get the location of the virtual method table, and dereferences that to get a pointer to the attachToFrame() function, which goes into %eax.

Then it does the usual function call setup, making room on the stack for the arguments and return address, and then calls the function at the location in %eax. It is here that the crash occurs, because %eax has 0 in it.

I was floored when I first saw this. I checked a number of times in different places, finally checking the constructor itself. And sure enough, the virtual table contains all zeroes. To me this smelt suspiciously like a relocation problem - if the ELF loader is not correctly doing the relocations for virtual tables, then they'll point to garbage memory, causing a crash.

I'm not entirely sure how this can be, and haven't figured it out yet. I need to check the place where the virtual table is normally initialised, but I don't know where that is! I can theorise by thinking about the structure of an object and the virtual table internally.

The first critical thing is that the virtual table inside the object is a pointer. That is, when the memory for the object is allocated, space is not allocated for the virtual table too. The pointer needs to be set to point to a valid virtual table. There's two ways this could be done: setting the pointer to some known static data that contains the data for this class, or allocating some more memory and copying the pointers from the same known static data.

The former seems the more likely to me. The extra allocation and copy seems unnecessary, as the table for the object will not change during the lifetime of the object. There are separate tables for each class the object inherits from, so there's no need for a group of tables to be mixed into a single one.

So given that as a theory, we should be able to find some code somewhere around the constructor that sets up the virtual table pointer. It'll probably be the first thing after the memory allocation is done. This code might not exist in the binary itself though but may be part of a language support library (libgcc or similar). Regardless, the thing that will need to be there is the virtual table location.

I'm expecting to find that the location of the virtual table is not being relocated properly by the ELF loader. Basically, I trust GCC to produce correct code more than I trust our loader to do the right thing. The problem could also be within our linker, collect-aros, but it's so simple that I'm happy to rule it out initially.

Stuart, get back to work!

Update 3pm: Found it. I missed one section header table index conversion when I was updating the loader for large numbers of sections. Stupid, but it never hurts to exercise my brain on the really low level stuff.

thursday, 13 december 2007

posted at 22:04
  • mood: hobbitish

I just now have the extensions to the ELF loader implemented such that my gargantuan WebKit test binary loads. It took me a lot of reading and experimenting to figure out what was going on but I got it.

In my last post I talked about how files with large numbers of section headers store the count of headers elsewhere in the file. I'd taken care of that just fine. The other important thing that I missed is that every entry in the symbol table has a section header index that points to the section that the symbol is relative to. Of course this is a 16-bit field also, and has the same problem as the header count does.

The solution to this one is even more crazy. Basically there's an entire extra section in the file that is just an array of 32-bit values. If a symbol refers to a section with an index that is too large, you basically go fishing into that array to find the real index instead. This of course means that I have to have that array loaded and available before I start doing any symbol table work.
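
In ELF terms, the extra section is of type SHT_SYMTAB_SHNDX, and the escape value in the symbol is SHN_XINDEX. A sketch, assuming the extended index array has already been loaded:

    #include <elf.h>
    #include <stdint.h>

    /* recover a symbol's real section index, following the extended-index
       escape described in the ELF gABI */
    static uint32_t symbol_section_index(const Elf32_Sym *sym, uint32_t symidx,
                                         const uint32_t *shndx_table)
    {
        if (sym->st_shndx == SHN_XINDEX)   /* real index lives in the extra array */
            return shndx_table[symidx];
        return sym->st_shndx;
    }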

Finally, something that confused me until I put together some nice macros to deal with it was that there's a "hole" in the range of possible section header index numbers. What used to be the top 256 values (0xff00 to 0xffff) are reserved as special control codes, markers and other such things. Now that the header count is fudged into 32 bits, we get the situation where the header at index 65279 (0xfeff) corresponds to section 65279, but the header at index 65280 actually corresponds to section 65536 (0x10000). So basically, anywhere a section number is found in any of the ELF structures, it has to be massaged into a header array index, taking the hole into account. This caused no end of issues, particularly since my file has hundreds of effectively unused sections - it was hard to even see when it was going wrong!
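
The mapping itself is trivial once you see it; the pain is remembering to apply it everywhere. As a sketch of what my macros boil down to (the name is illustrative):

    /* map a section number to an index into the section header array,
       skipping the reserved range 0xff00-0xffff described above */
    #define SHINDEX(n) \
        ((n) < SHN_LORESERVE ? (n) : (n) - (SHN_HIRESERVE + 1 - SHN_LORESERVE))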

So now ArosLauncher loads and runs and I get some debug output before it crashes:

UNIMPLEMENTED:
(../../../WebCore/platform/aros/TemporaryLinkStubs.cpp:42 void WebCore::setFocusRingColorChangeFunction(void (*)()))
UNIMPLEMENTED:
(../../../WebCore/platform/aros/SharedTimerAros.cpp:10 void WebCore::setSharedTimerFiredFunction(void (*)()))

Before I get back into WebKit though, I need to clean up this code and commit it. I still need to merge the other two ELF loaders. As far as I can tell from a cursory glance, the elf64 version is basically the same but using 64-bit definitions, macros, etc as defined by the ELF specs. The other, elf_aros, I'm not entirely sure about, but it's certainly much simpler. It's possible it just hasn't been looked at for a long time (the changelog certainly appears to show that). I'll continue to try to figure out what it's for, but my feeling is that it can probably go, and elf64 and elf can be comfortably merged with a little restructuring of the code.

One thing that has become apparent is that our loader is incredibly slow and rather naive. As we start implementing more features (debug support, memory protection, shared objects) I don't think it's going to cope well with its current structure. And it's certainly got its work cut out for it - I've been reading Ulrich Drepper's paper "How To Write Shared Libraries", and it goes into a lot of detail about the kind of pain the dynamic linker has to go through to make things work. The glibc loader is something I'll have to study, I think.

wednesday, 12 december 2007

posted at 13:56
  • mood: elvish

I wrote a simple launcher for WebKit that creates a WebCore::Page, attaches it to a WebCore::Frame, then tries to load the Google homepage with it. Unsurprisingly, when I ran it it crashed, as most of my factory methods just return NULL. I fired up the debugger and figured out where the crash was coming from, and found it was in FrameLoaderClient::createDocumentLoader, one of my factory methods. Curiously, this function calls notImplemented(), and so should have printed something to the console. A little poking revealed that I had done a release build, not a debug build, so I recompiled with --debug.

The resulting binary was almost three times the size, up around 300MB, which makes sense because it's now carrying almost the entire source code for debugging as well. I had to start AROS with -m 512 to give it enough memory to actually be able to load the thing. I started AROS, opened a shell, started ArosLauncher, and then the amazing fireworks began.

On my debug console, I got a line of output:

[LoadSeg] Failed to load 'ArosLauncher'

That's a problem - LoadSeg() is the program loader/linker. More exciting though was the line after line of pure binary appearing in my AROS shell. Do something like cat /bin/ls to see what I mean.

My first thought was that the awesome size of the binary was trampling something in memory, but a bit of poking around revealed the answer. When you type a command into the shell, it tries to load it as an executable file. If that fails, it checks if the file has the script flag enabled. If it does, it calls C:Execute with the file as an argument. Execute is the script runner, and it simply feeds the contents of the file into the shell's input buffer to be executed as though the commands were being typed.

Execute doesn't have any smarts to determine if what it's being passed is really a script; that would be a useful feature for it to have. The real issue though is that the ArosLauncher binary had the script flag set. I never set it, so it shouldn't have been.

Closer inspection revealed that the hosted filesystem driver, which maps Unix file permissions to AROS file permissions, was setting the script flag for every file without exception. That was perhaps a reasonable choice at the time it was written: Unix does not have a script flag or anything similar, so it wouldn't have been immediately obvious what to map it to, and the flag was never used in AROS anyway until recently (the shell gained support for testing for it and calling Execute a couple of weeks ago). Clearly though it's not right, so I had to do something. I modified the permissions mapping code in emul.handler to map the AROS script flag to the Unix "sticky" (t) permission bit. I also implemented FSA_SET_PROTECT at the same time, so now typing protect +s file in AROS achieves the same as chmod +t file in Unix, and vice-versa.

So with that fix in hand, ArosLauncher was rerun and the far simpler error was returned:

ArosLauncher: file is not executable

So the next step was to dig into LoadSeg() and find out why it couldn't load the file.

A tiny bit of background: any program, library or other "executable" thing under AROS (and most Unix systems) is stored in a format called ELF. It is split into a number of "sections", each containing one kind of information - program code, data, symbol names, debugging info; there are lots of different types. It's up to the OS loader/linker to pull all these together into a runnable program.

So, with the ELF specs in hand I started stepping through the loader code, and quickly found the problem. When you compile something with debugging information, it adds many extra sections to the binary object, containing what amounts to the entire source code for the program, so the debugger can give you the proper context and so on. Because it includes all of WebKit, ICU, cURL, libxml and SQLite, it has a lot of sections. Somewhere in the order of 75000 in fact.

The field in the ELF header that stores the count of sections is a 16-bit field, which means it can only count up to 65535. Clearly there are too many sections in this file to fit. For this case the ELF spec has an escape hatch: the count in the header is set to 0, and the loader must read the first section header, where the real count is stored in a 32-bit field (sh_size) that for any other section would hold that section's size.
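
In code, the special case looks something like this - a sketch against the standard ELF32 structures, assuming the file is already mapped into memory at eh (the real loader works through its own I/O and structure definitions):

#include <elf.h>

static Elf32_Word section_count(Elf32_Ehdr *eh) {
    if (eh->e_shnum != 0)
        return eh->e_shnum;    /* normal case: the count fits in 16 bits */

    /* extended numbering: the real count lives in the sh_size field of
     * section header 0 */
    Elf32_Shdr *sh0 = (Elf32_Shdr *)((char *)eh + eh->e_shoff);
    return sh0->sh_size;
}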

So I implemented this, and it works - it finds the headers correctly and does the relocations as it should. It's still not at the point where it will run ArosLauncher. It would appear that there's a symbol type that the AROS loader doesn't know about and is interpreting as being invalid, rather than handling/ignoring it. I'm not sure what's appropriate yet; I'll take more of a look on my bus ride home today.

More todo items: There are three ELF loaders in AROS currently, elf, elf64 and elf_aros. elf is the main one that I'm working on, elf64 is a copy of it taken recently with support for 64-bit symbols, and elf_aros is an old one that I have no idea what it's for or where it came from. I have no desire to make my modifications in three files, particularly when I have no 64-bit system to test on, so I'm going to look at trying to merge these three files back together.

monday, 10 december 2007

posted at 14:46
tags:

Just a followup about the whole _GLOBAL_OFFSET_TABLE_ thing. Apparently this symbol is provided by the linker, so it makes sense that it doesn't work, since the AROS linker is actually a custom ld script known as collect-aros which doesn't handle position-independent code at all.

If we were to ever have real ELF-style shared libraries, this is one thing we'd need to implement. The other thing we'd need is a whole load of stuff in LoadSeg(), which is our "runtime linker".

Nothing to see here, just some notes for posterity.

sunday, 9 december 2007

posted at 11:27
  • mood: sucks
  • music: machinea supremacy - sidology 2

I'm a little stuck. Last night I wrote a trivial startup program to make sure linking and calling into WebKit was working correctly:

#include "config.h"
#include "Logging.h"

extern "C" {

void webkit_init(void) {
    WebCore::InitializeLoggingChannelsIfNecessary();
}

}

int main(void) {
    webkit_init();
    return 0;
}

It compiled fine, but the link failed:

There are undefined symbols in 'ArosLauncher':
         U _GLOBAL_OFFSET_TABLE_

All the bits of information I need to resolve this are scattered around (if they exist at all), but what I've learnt is this. WebKit is compiled with -fPIC, which produces position-independent code. This is what you want when producing an ELF shared library. Essentially what it does is set up an offset table to hold the locations of all the global symbols in the library, and causes the generated code to access those symbols through the table instead of going direct. Later, when the library starts, the runtime linker fills in this table with the correct locations of all the symbols. This allows the OS to place the library anywhere in memory it wants, rather than at the location the library was compiled for initially. This is all great stuff that doesn't make the slightest impact on AROS, as our shared libraries don't work this way. Well, they do conceptually, but that's a topic for another time.

I'm compiling all this code into a static library, but because it was compiled with -fPIC it has lots of references to _GLOBAL_OFFSET_TABLE_. Here's where I'm unsure of what's happening. Either GCC is not setting up the offset table because our patches to make it work on AROS don't enable it (reasonable, since we don't support ELF shared libraries), or it's just implied that if you're linking with a static library you won't need the offset table and are expected to compile without -fPIC. I spent a lot of time last night believing the former, but after being completely unable to find anything in the GCC code that supports this, I'm really starting to lean towards the latter.

Which brings us to the next problem. Currently AROS WebKit is built using qmake, the build system for Qt. I chose this because it was the easiest way to get a cross-build running at a time when I had no real idea what I was doing. It would seem that it's currently set up to build a shared library, which I'm hacking around at the last stage to make it output a static library. I haven't found an obvious way to disable -fPIC yet.

This highlights the next issue. qmake is not going to cut it going forward. Actually, none of the existing WebKit build systems are really suited to cross-building - it's all hacks so far. Before long it's going to need a real build system. I'd like to use plain GNU make so that there won't be an issue with compiling the mess on AROS, but there's still going to have to be some stuff copied from the AROS build system to support setting up the Zune stubs, for example. That suggests just using mmake directly, except that I have my reservations about its long-term suitability for anything. The build system is not something I want to debate here; I've said my piece about it elsewhere and I'm deliberately not discussing it until I have time to do my own experiments.

So here I am at a bunch of apparent dead ends. I'm going to spend a little more time right now trying to bend qmake to my will, but this whole mess is rapidly getting out of hand. I believe a sigh is the appropriate action at this point.

Sigh.

Update 12:53: Figured out how to turn -fPIC off, and I now get why it wasn't working. I now see logging output on the console, awesome! A better build system is still required.

saturday, 8 december 2007

posted at 18:25

Today marks a major milestone for the WebKit port. It compiles!

-rw-r--r-- 1 rob rob 24782208 2007-12-08 18:25 libWebKit.a

It doesn't do anything yet, but it compiles. I have 298 stub methods across 41 AROS-specific files. Each one calls the notImplemented() macro, which simply prints the name of the method that was called.

The plan of attack from here is to write a tiny main() that tries to create a frame object and hand control to it. That should yield several million lines of output from notImplemented(). I implement those methods, one at a time, until I get something on screen.

Once I get a decent way into that process I should start to gain some understanding of how WebKit is actually bolted together. Once I have that I can start to think about the design of the Zune API.

The real fun starts now. I'm looking forward to writing some actual code rather than just stubbing functions :)

saturday, 1 december 2007

posted at 09:04

This week I've been working on another WebCore dependency, though a little different to the previous ones. To work well, it seems that WebCore needs a threading system. AROS doesn't have one. I think WebCore can work with just its stubs, but I don't want to. I want this done properly.

I started looking at how I might implement threads, and it seemed that the interface was general enough that it could be useful as a shared library for other things besides WebCore. And so thread.library was born.

It's almost ready. Threads work, mutexes work, conditions work. The only thing I'm dealing with is what to do when the main task exits while threads are still running. There are a bunch of bad things that can happen, which I don't have time to go into right now, but the best thing I can do is to simply detach the threads and allow them to continue running. See here for more details, though the description is out of date - the code is now doing option 1, and the ThreadBase issues have been dealt with. The last thing to take care of is a small memory allocation issue that is causing a crash, but once that's done I'll check it in to AROS subversion for anyone to use.

Update 2007-12-03: Code is now in AROS SVN, and will be available for use in tonight's nightly and SDK builds. Be sure to read the README before starting.

sunday, 25 november 2007

posted at 20:50

Late Friday I reached a minor milestone when I got the platform-independent part of WebCore fully compiling and linking. Next up, the tricky bit: the platform dependent stuff, otherwise known as the actual port.

I spent a couple of hours staring at various WebCore classes trying to make sense of them, and eventually I started to get a feel for the structure, though I'm a long way off really understanding it. Basically, WebCore has classes for common GUI elements, like fonts, menus, and so on. To do a port, you have to reimplement these classes to wrap the same functionality in whatever graphics system you happen to be targeting. It was around this point I realised that I know basically nothing about the AROS GUI toolkit, known as Zune.

I had a look around for examples and documentation, and I started to see what was going on, but a lot of the code is a mess and it's hard to get a clear picture in my head of what's happening. The only option left to me is to write a small application using some of the Zune features that I'll need, to get an idea of what makes it tick.

I thought about it a bit on Saturday, and today spent a couple of hours implementing this little app that I call fonty:

It's a font preview program. You give it a font name and point size, and it'll render some text in that font. We already have stuff like it, so it's not particularly useful, but so far I've learnt about the basic structure of a Zune application, how to make a Zune custom widget class (I have a separate FontPreview custom class), and how the Amiga font system works. It'll soon have a context menu that allows selecting different styles, and changing the text. Again, not really great in terms of usability, but it lets me see how everything works. And it's kinda fun to write too :)

wednesday, 21 november 2007

posted at 06:35
tags:

Michal writes about his continuing pain with the Amiga LONG/ULONG types on 64-bit AROS. Some guidelines for types (there's a small sketch after the list):

  • If you're writing new code, just use the normal C types, and if you need types of a specific width, look to C99 uint32_t, etc. On AROS, LONG is always 32 bits, even on 64 bit systems. The C type long, however, can be 32 or 64 bits. Don't assume they mean the same thing.
  • Don't use ULONG, BYTE, IPTR, etc except when calling a system API that uses them, and then take care to make sure your type conversion is spot on.
  • The possible exception to this is BOOL, but only ever assign TRUE or FALSE to it, and never explicitly test its value; that is, use if (flag), not if (flag == TRUE).
  • Don't store pointers in non-pointer types. If you want a generic pointer, use void *. If you need to convert between an integral type and a pointer, use intptr_t/uintptr_t.
  • Don't do clever bit things with bit fields, like Michal describes for FHF_WRITE. Just say what you mean.
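
A short illustration of these guidelines in one place - a hypothetical snippet, not from any real source file:

#include <stdint.h>
#include <exec/types.h>

void example(void *p)
{
    uint32_t fixed = 42;            /* need exactly 32 bits? say so */
    unsigned long loose = 42;       /* width doesn't matter? plain C types are fine */
    uintptr_t addr = (uintptr_t)p;  /* integer <-> pointer conversion */

    BOOL flag = TRUE;
    if (flag)                       /* never if (flag == TRUE) */
        fixed += (uint32_t)(loose + addr);
}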

This community service announcement brought to you by the variables i, tmp and foo, in the hopes that it helps the general health and wellbeing of people like Michal who have to decipher your and my bad code by themselves years after it was written :)

tuesday, 20 november 2007

posted at 13:45

I finally finished my dependency porting stuff, with libxml2 coming to life late last night. I haven't tested it properly yet, as its test suite requires glob(), which we of course don't have. I'll look at integrating a version of it soon so that I can run the tests. For the moment I'm totally over dependency porting, and eager to get onto WebKit proper.

Before bed I wrote the first line of AROS-specific code in WebCore. Ready? Here it is:

typedef struct BitMap *DragImageRef;

I have no idea what it does yet, but it was enough to get the relevant file (WebCore/platform/DragImage.h) to compile, and that's all I care about right now.

The build is going well. So far I'm just stubbing platform-specific classes to get the thing to compile. Once it's compiled, I'll start implementing those stubs.

One thing that was missing that would have been difficult to work around inside WebCore itself was the reentrant time function localtime_r(). A bit of hacking on the bus this morning and AROS now has this function, along with its friends ctime_r(), asctime_r() and gmtime_r(). Phew.
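
The general shape of such a wrapper, as a minimal sketch (the name and approach here are illustrative, not the committed AROS code; copying out of the static buffer is only safe if nothing else calls localtime() concurrently, and a proper implementation converts directly into the caller's buffer):

#include <time.h>

struct tm *my_localtime_r(const time_t *timep, struct tm *result)
{
    struct tm *tmp = localtime(timep);  /* fills a shared static buffer */
    if (tmp == NULL)
        return NULL;
    *result = *tmp;                     /* copy into caller-owned storage */
    return result;
}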

Tonight's work is adding stubs for PlatformMenuItemDescription, whatever that is :)

sunday, 18 november 2007

posted at 21:16

Today I finished porting cURL, a library for getting things from the internets (or actually, anything with a URL). It's probably the dirtiest port I've done so far, partly because the configure script is a mess (it knows enough to know that I'm cross-compiling, but doesn't know enough about cross-compiling to do anything other than get in my way), but also because of the bsdsocket.library madness (which, if you've been in #aros at all in the last couple of days, you'll have heard my opinions on).

Obligatory screenshot:

Code at /git/aros/curl.git.

In other news, I committed my mlib patches this morning after a little testing and tweaking by Markus Weiss to get them working on PPC. I'm quite proud of it - it was a big, unknown thing and it came off without a hitch.

So now, armed with the 20071118 nightly (available in just a few short hours), it is (theoretically) possible to build all of the Traveller stuff done so far. If only you all had some build instructions .. :P

friday, 16 november 2007

posted at 21:53

Quick one before bed: a port of OpenSSL (http://openssl.org/), which is needed for cURL (http://curl.haxx.se/), which is needed for WebCore.

It was actually a pretty easy port to make. OpenSSL is ported to so many platforms already that it was pretty much just a case of copying stuff from similar platforms. Amusingly, the platform most similar to AROS as far as OpenSSL is concerned is Netware :)

Code available at /git/aros/openssl.git.

thursday, 15 november 2007

posted at 14:09

The results are in. The browser will be called "Traveller" (that's British spelling, with two ells). I had already thought of this as a potential name before asking for ideas, and when a couple of people suggested it too I knew it was good.

The reasons I like it are threefold:

  • It's a good companion for Wanderer.
  • It carries on the tradition of giving browsers a name related to finding the unknown: Navigator, Explorer, Konqueror, Safari, etc.
  • It references an in-joke among the members of my team at work, so it's just a little bit personal too.

So thanks everyone for your input. I enjoyed hearing all your ideas :)

Relatedly, Paul J. Beel asked me a bunch of questions about the project and has just posted my answers over at the AROS Show. That should pretty much cover what exactly it is I'm doing and what you can expect.

I'm looking for someone who is savvy with graphics to produce some art for the browser - icons, throbber, about screen, etc. I have ideas, but need someone who knows how to produce art and animations to give me a hand. Contact me via email (rob@cataclysm.cx) or grab me on IRC (fce2 on irc.freenode.net).

Now that all the excitement and administrivia is out of the way, time to do some actual hacking.

wednesday, 14 november 2007

posted at 16:02
  • mood: coy

Just a few quick updates.

First, thanks all for your name suggestions. I hated some of them, I loved some of them, and I've finally decided on the name. It's one that I had thought of beforehand, but a couple of people suggested it here too. I'm not revealing it yet though; Paul J. Beel of The AROS Show has sent me some questions for an interview and I've promised that I'll reveal the name there. Of course I'll post it here shortly after, but of course you all read his stuff so you won't need it :)

I've started porting the WebCore dependencies. First up is the easy one, SQLite, which I finished porting this morning. It's a horrendous port, with no file locking and hacked-up path handling, but it was the cheapest and fastest I could do, and will suffice for what I want. I don't want to get bogged down on tangents; WebCore itself is going to take enough time and brain to do without being distracted.

I'm now publishing my work as I go. The repositories for WebKit and its dependencies will appear at http://cataclysm.cx/git/. Feel free to clone from them and do whatever you want with the code. I'll post some build instructions soon; it's quite hairy. I've also put my AROS repository up, which is where I'll publish stuff that hasn't made it to AROS SVN yet (usually because it's unfinished and/or broken).

That's all for now. Heading home :)

sunday, 11 november 2007

posted at 21:01

Ever wanted to name a web browser? Here's your chance. I need a name now so that I have a way to refer to the whole project, rather than "WebKit port" (accurate until I start work on the chrome) or "browser bounty" (duh).

I have a couple of ideas, but feel free to post a comment with names of your own devising. I'll choose the one I like the most, or if they all suck, I'll choose one of my own. It's not a democracy, you know ;)

Update: Name has been chosen. Thanks all for your suggestions :)

saturday, 10 november 2007

posted at 10:07

It's been a big week of AROS coding, with a milestone reached last night: JavaScriptCore, the JavaScript engine inside WebKit, is now compiling and running on AROS. As such, I'm satisfied that a full port of WebKit to AROS is feasible, and so I've taken on the bounty to produce a browser.

My process for building WebKit has been simple. I made minor changes to their build system to use AROS cross-compilers, and then let it build until it breaks. Then I go in, figure out what died, and fix it. Often this is easy, requiring only some platform defines and such. Sometimes it's been a little harder, which is where posix_memalign() came from. The really fun thing happened at the start of the week when the build failed because a couple of math library functions were missing.

Our math library (known as mlib or arosm, depending on where you look, though every other platform calls it libm, go figure) was originally taken from a math library written at Sun way back in 1993, and released for free. We got our copy from FreeBSD in 1999, and it was updated again in 2003. It's missing a lot of stuff though, notably things from C99.

I had a look through the FreeBSD code and found the functions I needed, but on noticing just how much stuff was missing I decided it might be better to do a full refresh of libm. As is usual when I start on something, it rapidly got out of hand.

I had to make a few changes to our core headers to provide all the necessary defines and types and such to make it work. The new code also has an amount of architecture-specific code for using the FPU. Fortunately FreeBSD supports all of the architectures that we have active ports for (i386, x86_64 and ppc), so it was just a matter of getting the right code into place.

In any case, lots of tweaking and merging has been going on, such that I now have about 20,000 lines of changes spread out over 21 patches. I haven't committed them yet as I'm waiting on some build macros from Staf to allow me to build the architecture-specific files into the library correctly. My hacked version seems to work well, and passes a couple of tests from Fred Tydeman's C99 FPCE test suite. I'll run all the tests soon, but I expect them to pass without issue.

Once the patches can compile cleanly, I'll try to get some other AROS devs to review them, as they're big and I'm scared. Once it's all deemed good, they'll go in, and we'll be doing fancy math forever. Hurrah!

Anyway, after shoring up the holes in AROS, it was back to JSCore. The code is exceptionally well written, and easy to port. Apart from adding #ifdef here and there, the only actual code I had to write was stuff to help the garbage collector find the stack base, and that's two lines in kjs/collector.cpp:

    struct Task *task = FindTask(NULL);   /* the currently running task */
    return task->tc_SPReg;                /* its saved stack pointer, where the GC starts scanning */

The JavaScript engine test program testkjs runs properly. The only issue is that the garbage collector is not fully cleaning up all the objects at script exit, which I think may be a memory management issue. I haven't fully tracked it down, but the folks in #webkit (particularly bdash) have been very helpful and I'm expecting to have it sorted out soon.

So that's my progress so far. My plan for the browser proper is to implement it in two stages. The first is the port of WebKit proper, which means porting JavaScriptCore and WebCore, writing a trivial launcher application, porting the libraries it depends on, and otherwise fixing things in AROS. Once that's done, the second stage begins, which involves integrating WebKit into AROS proper. I haven't thought this through fully yet, but I expect at this point that I'll be writing a Zune widget to allow applications to embed WebKit, and from there writing a Zune application to be the browser proper.

I'll be making my git repositories available shortly, so the brave can track my progress. And you'd better believe that only the brave need apply - you need to be willing to track AROS and WebKit SVN repositories and regularly recompile AROS, gcc and WebKit. Oh, and there's a 20-step build process for ICU as well, one of the WebKit prerequisites. It's early days though; this will be made easier once I'm finished, so other people can hack on this too.

saturday, 3 november 2007

posted at 12:40
tags:

I just finished implementing posix_memalign(). It will help with JavaScriptCore porting, as its allocator/garbage collector wants to do lots of memory tricks, including unusual alignments. I'll write more about my WebKit porting progress later.

I love doing pointer arithmetic. It spices up C so that I feel like I'm writing Perl one-liners:

int posix_memalign (void **memptr, size_t alignment, size_t size) {
    UBYTE *mem = NULL, *orig;

    /* check the alignment is valid */
    if (alignment % sizeof(void *) != 0 || !powerof2(alignment))
        return EINVAL;

    /* allocate enough space to satisfy the alignment and save some info */
    mem = AllocPooled(__startup_mempool, size + alignment + AROS_ALIGN(sizeof(size_t)) + AROS_ALIGN(sizeof(void *)));
    if (mem == NULL)
        return ENOMEM;

    /* store the size for free(). it will add sizeof(size_t) itself */
    *((size_t *) mem) = size + alignment + AROS_ALIGN(sizeof(void *));
    mem += AROS_ALIGN(sizeof(size_t));

    /* if it's already aligned correctly, then we just use it as-is */
    if (((IPTR) mem & (alignment-1)) == 0) {
        *memptr = mem;
        return 0;
    }

    orig = mem;

    /* move forward to an even alignment boundary */
    mem = (UBYTE *) (((IPTR) mem + alignment - 1) & -alignment);

    /* store a magic number in the place that free() will look for the
     * allocation size, so it can handle this specially */
    ((size_t *) mem)[-1] = MEMALIGN_MAGIC;

    /* then store the original pointer before it, for free() to find */
    ((void **) &(((size_t *) mem)[-1]))[-1] = orig;

    *memptr = mem;
    return 0;
}

wednesday, 31 october 2007

posted at 14:10
tags:

I've been working on a few small things over the last week or two, trying to tie up some loose ends.

First, I fixed the long-standing file notification bugs that, among other things, have caused the preferences apps to not work correctly. Back in May I was doing lots of work on DOS, and I fixed the file notification calls so that they followed the same semantics as they did in AmigaOS. I made a mistake though: I didn't fully implement volume name expansion, such that if you requested a notification on a file in the root of a filesystem (eg RAM:foo), it would be taken as-is rather than having the volume name expanded (eg Ram Disk:foo). This caused ram.handler to set up the notification on a different file to what all other DOS calls (which did expand the volume name properly) would use. As a result, no notifications were ever sent. This didn't come up for fat.handler, which I was also working on at the time, as it does its own name expansion internally. This is all fixed in Subversion revision 27105, and in nightlies 2007-10-28 and later.

Next, I got AROS compiling under Ubuntu. Recently GCC has included a nice feature called "stack-smashing protection". When enabled, the compiler places guard values ("canaries") on the stack when a function is entered, and checks that they're intact before the function returns. If they're not, an OS-provided function is called to take action, which usually involves killing the offending process. Stack smashing is a classic source of security flaws (the buffer overflow, for example), so this is a good thing.
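
For illustration, here's the classic kind of bug the protection catches - a deliberately broken toy, not from any real code:

#include <string.h>

/* with stack-smashing protection enabled, GCC plants a canary between
 * buf and the return address; a long input tramples the canary, the
 * check at function exit fails, and the process is killed before the
 * corrupted return address can be used */
void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* no bounds check */
}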

AROS doesn't have support for the feature, so the compile will fail if this option is enabled. GCC on Ubuntu enables it by default, so the build would always fail there. I've checked in configure changes that detect whether the compiler supports the option and disable it. It took me three tries as there are some complications - the option doesn't exist on GCC 3.x, and we also use the host compiler to fake a cross compiler, so the option has to be disabled via the specs file as well. To get the kinks out I installed Ubuntu into a virtual machine and messed with the config files and rebuilt over and over until the entire tree built. This is available in r27116, nightlies 2007-10-30 and later.

Then, yesterday, I stubbed a few missing clib functions needed by ICU (the major prerequisite for WebKit). Of course they don't do anything, but at least ICU can now link correctly. I suppose at some point I'm going to need to fully implement them, but I really don't want to muck about with character conversion functions just now. Functions mbtowc(), wctomb(), mbstowcs() and wcstombs() are now available in r27120, nightlies 2007-10-30 and later.

And finally, last night I added definitions to our headers to define sig_atomic_t. We don't have working POSIX signals yet, so it's kinda pointless, but with the definitions in place libstdc++3 (and thus g++) can now be compiled without needing to hack the AROS headers half way through the build process. Available in r27121, nightlies 2007-10-30 and later.

So that's my brain cleared of the odd jobs, and I can now concentrate properly on WebKit. The next step, which I probably won't really get a chance to start on until Friday night, will be to get JavaScriptCore compiling. Whee!

sunday, 28 october 2007

posted at 19:20
tags:

Ahh software licensing, a topic I try really hard to avoid. mausle raised a concern with my previous post about ripping code from the Linux kernel for use with AROS:

... you can't just rip code under GPL license out of the linux kernel tree and link it with AROS code. I know there might be a grey zone with modules already loaded by grub.

It's a comment that needs addressing in more depth than a simple reply will allow, so here we go: my position on GPL code in AROS.

Obvious disclaimer: I'm not a lawyer, and have no access to one. I can make some guesses based on my own reading of licenses and precedent in other software projects. Of course, if the FSF or whoever want to tell me I'm wrong, I'm happy to listen.

First, let's deal with an obvious case. Distribution of GPL'd source code alongside other source code with incompatible licenses is no problem. The GPL only has issues with otherly-licensed software when it comes to linking with them and distributing the result.

And now the gritty bit. The way the GPL places requirements on other software is entirely based on the mechanism by which the pieces of software are combined and interact. The GPL itself is vague on this, but the FSF have a FAQ item in which they acknowledge this grey area and provide their perspective on it:

What constitutes combining two parts into one program? This is a legal question, which ultimately judges will decide. We believe that a proper criterion depends both on the mechanism of communication (exec, pipes, rpc, function calls within a shared address space, etc.) and the semantics of the communication (what kinds of information are interchanged).

If the modules are included in the same executable file, they are definitely combined in one program. If modules are designed to run linked together in a shared address space, that almost surely means combining them into one program.

By contrast, pipes, sockets and command-line arguments are communication mechanisms normally used between two separate programs. So when they are used for communication, the modules normally are separate programs. But if the semantics of the communication are intimate enough, exchanging complex internal data structures, that too could be a basis to consider the two parts as combined into a larger program.

In many ways this description is tied to Unix-like systems, which makes sense as that is the context in which the GPL was originally developed. It's reasonable that these guidelines are not in the license itself, but since these aspects have never been tested in court, all we have to work with is what the original authors were thinking.

In another FAQ item on plugins, we get more relevant detail:

It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them.

If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed.

If the program dynamically links plug-ins, but the communication between them is limited to invoking the `main' function of the plug-in with some options and waiting for it to return, that is a borderline case.

So now we need to look at how software running on AROS is combined.

Let's break these descriptions down from an AROS point of view:

  • AROS modules are either executable programs, communicating via message ports, or libraries of code which are accessed via a vector table.
  • Dynamic linking usually means loading two or more modules into memory and then updating pointers within them to arrange for them to directly call into and access each other. AROS modules do not do this; library calls are instead done via function pointers held in a vector table attached to the library, which is fetched via a call to OpenLibrary().
  • If modules (usually executable programs) communicate via message ports, then those message ports are accessed from known locations, and data is sent in a format defined in the OS headers.
  • For all practical purposes, AROS modules exist within the same address space - we have no memory protection. Messages passed between ports are merely pointers to some area of memory.
  • Packet-style filesystems are executable programs, invoked via the equivalent of fork-and-exec.

From this it seems to me that AROS modules that exist standalone (ie loaded from disk) aren't really linked with the OS core itself, and thus can't be considered as part of the same program.

There's a small spanner though when it comes to filesystems: how the filesystem driver is loaded if no filesystem is up and running yet. I think these can be argued around with only one grey area. Let's use a hypothetical GPL ext2 driver as an example.

In the simplest case (license-wise), the AROS boot partition is AFS or something else that is included in the boot image, and you want to have a seperate ext2 partition mounted. The driver would simply be loaded from the AFS partition (say DEVS:ext.handler) and the filesystem mounted that way.

If you wanted to boot from an ext2 partition, then we have two options: including the driver in the boot image, or including it in a separate module set to be loaded by GRUB.

Background, for the uninitiated: GRUB has the ability to load one or more files from any filesystem that it has a driver for and place them in memory somewhere for the kernel to find when it starts. AROS on x86_64 uses this already, with the "core" boot image only including Exec, DOS and a few other absolute necessities and the rest coming from the module set.

These modules are loaded from disk and aren't included in the main boot image. The kernel has to go to a little extra effort to find and use them, so they clearly aren't a dynamic link. I'd argue that there's no grey area: the modules are standalone programs.

As for including the filesystem in the core boot image, that I'll concede as a grey area. However, one could argue that since there's still no linking between the modules included (the kernel searches through memory for the ROMTag structures at the start of each module), the boot image is actually some kind of archive format (perhaps a self-extracting archive) and therefore still not linked. I think I'd lose that argument, and I won't contest it, but I still find it to be an interesting angle. The point is moot however: as time goes on we'll find x86, hosted and every other platform port of AROS moving to the x86_64 model, with a minimal loader and startup image and the rest of the modules (including filesystems) pulled in from disk, whether that be by GRUB or some other mechanism.

So hopefully I've made some kind of argument there. No doubt plenty of people will disagree with me. Good - this is the time to sort this out before I start writing code :P

thursday, 25 october 2007

posted at 10:42
tags:

I've been thinking and poking at AROS a lot this week, so I have heaps to write about but haven't had time. I'll try to find more time today to get it all down. Here's the first installment.

Recently AROS has obtained a new installer, but it's still hamstrung by the fact that we don't really have a proper filesystem to install AROS onto. We currently have three filesystem options:

  • AFS: This is our current default filesystem. It's an implementation of the Amiga Fast File System. Our implementation is buggy and frequently trashes the disk. Even if it did work well, it would still suck, as it just wasn't designed for the massive disks we have these days.
  • SFS: The so-called Smart File System. We have a port of the open-source release that was made a few years ago. It works well enough, and actually performs and scales nicely if you can get it set up. It still has a few bugs, isn't really maintained, and most significantly, GRUB has no support for it, so we can't currently boot from it, meaning that a minimal AFS boot partition is needed, with all that that entails.
  • FAT: The newcomer :) Although no one has done it yet, it should be possible to boot from it. It's stable, and GRUB has support for it. Its two failings are that it has no support for Amiga-specific file attributes (eg script and pure) and it doesn't scale well to massive disks.

So, what to do? We can fix the bugs and stabilise the filesystems. We could implement support for SFS in GRUB. We could add extended attributes to FAT, either by making our own incompatible extensions or by using "magic" files. Ultimately though, we're left maintaining our own filesystems. As disks become larger and larger and new innovations in filesystem design appear, we're going to be left behind. We're not filesystem designers. We don't have enough people to commit resources to it. The best thing we can do in this situation is to steal something :)

I propose taking ext2/ext3(/ext4). Literally taking it - ripping it from the Linux kernel and porting it. Here's why:

  • It's maintained by a bunch of people who actually know something about filesystems.
  • It has some advanced features (like journalling, online resizing, etc) and is getting more all the time (see ext4).
  • Its superblock and file structures have space for OS-specific data, so we have a place in the filesystem proper to store Amiga-specific attributes without having to fudge it.
  • It also has support for arbitrary extended attributes and forks, so file comments and other large metadata will have a home.
  • The tools for creating, manipulating and validating filesystems already exist in the form of e2fsprogs, and should port with a minimum of fuss.
  • Read/write support is available on the three major operating systems (Linux, Windows and OS X). Of course they won't know about OS-specific data, but if the defaults are done right it actually won't make a difference for most files.
  • GRUB knows about it, and thus can boot from it.

I've done enough research to know that this can work. I have no immediate plans to implement it, but it'll be something I look at eventually. Of course, if you want to work on it I'm quite happy to help out.

tuesday, 16 october 2007

posted at 21:21
tags:
  • mood: fightery

This morning I was greeted with a PayPal email informing me that as a result of my recent SDL work Team AROS have seen fit to award me the princely sum of $38.92. This brought the latest object of my affections well into range of this week's budget, so I went down to EB today and picked up a copy of Phantom Hourglass.

As expected, I'm loving it. The graphics are awesome and the control system is crazy good. Yep, descriptive, I know. Just buy it, it's good.

So I might be quiet for a few days while I get stuck into it. Don't be alarmed, I'll be back soon :)

thursday, 11 october 2007

posted at 14:56
tags:

Compile stuff is on hold for a little while as I wait for answers to my questions:

If you want to play with this stuff, I'm making my git repository available here:

As the name suggests, the goal is to get a standalone AROS SDK up and running. This will be a long-term project.

If you do try to use this, you'll need to edit $prefix/i386-pc-aros/sys-include/signal.h and declare sig_atomic_t. I have notes from Iain Templeton (our headers guy) and I'll be trying to do this properly sometime soon. For now just uncommenting the existing declaration is enough to make GCC compile.

I don't recommend shipping actual binaries from this yet though. I can't guarantee anything other than brokenness at this point.

For now I've installed 4.1.2 into /usr/local and will now start trying to compile JavascriptCore (a WebKit component). Focused time will be limited for the next week at least because my wife and new baby are coming home in a couple of hours :)

Update 10:00pm: I just got 4.2.2 to build and compile things correctly. The --enable-version-specific-runtime-libs switch to configure takes care of it. I don't know what changed in 4.2, and don't much care at this stage.

monday, 8 october 2007

posted at 18:15
tags:
  • mood: dad

Elizabeth (Beth) Norris was born at 9.45 this morning at Box Hill Hospital. She's just a touch under four kilograms (8 pounds 13 ounces for you dirty imperialists). Both her and her mother are doing just great. Dad is rather chuffed about the whole thing :)

Francesca hasn't met her yet, that will happen tomorrow morning when we're all fresh. We're really interested to see how she goes and if Beth meets her expectations after months of hype. I expect it will be just fine and she'll love her little sister.

Obligatory happy snaps:

sunday, 7 october 2007

posted at 10:54
tags:
  • mood: compiling

Quick one. Short story: I want to port WebKit to AROS. It needs GCC 4 to build. It uses C++. AROS doesn't have any of that available.

I've just finished refreshing our GCC patches to get g++ and libstdc++3 cross compilers from GCC 4.0.3:

rob@plastic:~/code/aros/gcc$ ls usr/bin/
collect-aros            i386-pc-aros-gcc-4.0.3  i386-pc-aros-nm
i386-pc-aros-addr2line  i386-pc-aros-gccbug     i386-pc-aros-objcopy
i386-pc-aros-ar         i386-pc-aros-gcj        i386-pc-aros-objdump
i386-pc-aros-as         i386-pc-aros-gcjh       i386-pc-aros-ranlib
i386-pc-aros-c++        i386-pc-aros-gcov       i386-pc-aros-readelf
i386-pc-aros-c++filt    i386-pc-aros-gjnih      i386-pc-aros-size
i386-pc-aros-cpp        i386-pc-aros-grepjar    i386-pc-aros-strings
i386-pc-aros-fastjar    i386-pc-aros-jcf-dump   i386-pc-aros-strip
i386-pc-aros-g++        i386-pc-aros-jv-scan
i386-pc-aros-gcc        i386-pc-aros-ld

I've only compiled basic "hello world" programs for C and C++ so far, but everything seems to be working properly. I'm not totally happy with the setup yet - in particular you have to explicitly set up paths to help it find collect-aros and libstdc++, so I'll need to fix that. Also it's currently only GCC 4.0.3. I'll soon be patching GCC 4.1.2 and GCC 4.2.1, then do some test releases.

And yes, that is gcj you can see up there, though we'll have to port GNU Classpath before it can be useful :)

thursday, 4 october 2007

posted at 23:18
tags:

I committed sdl.hidd today. It's not finished by my own standards, but I'm pretty much out of time and frustrated by the fact that to make it go any faster I have to keep copying code from the bitmap baseclass. Better to commit it now, let people play with it, and work to fix the driver interface instead.

I can't remember if I wrote about it previously, so here's the summary. The bitmap baseclass implements its methods by repeatedly calling GetPixel() and PutPixel() in the driver proper. This is reasonable; that way a graphics driver only needs to implement those two methods to get something on screen.

The problem is that in many setups, it's necessary to do a second "flush" operation to actually get any changes to appear on the screen. I don't know if this is a problem with real hardware, but it at least has to be done with X11 and SDL. This sucks - with X11 (and for me, SDL, since I develop with it on top of X11) that means a full request/response into the X server. This makes large operations like area filling very slow, as every pixel gets flushed individually.

The way around this, obviously, is to override some of the higher-level operations to do their work and only flush once. This sucks though if the underlying hardware/library does not actually support the operation natively. At this point, you're left with two options - copy code from the baseclass implementation but have it operate directly on the framebuffer, with one final flush at the end, or don't implement it at all and just take the slow fallback.

If it were only a couple of functions I wouldn't mind copying code from the baseclass, but pretty much every method gets passed something called the graphics context. This contains all sorts of information about how the driver should perform the operation. Should pixels be opaque or transparent, should lines be full, dashed, etc, should area fills be solid or patterned, and so on.

x11gfx.hidd can go fast because X11 too has a concept of a graphics context with largely the same semantics (in fact I suspect the concept was copied, given that X11 was the first graphics system AROS used), so X11 can accelerate nearly all graphics operations it receives. (Notably one that it can't handle directly is BitMapScale(), which was horribly slow before I fixed it).

Alas for SDL, which has no idea about graphics contexts; indeed, it (by design) has no drawing primitives at all. Libraries like SDL_gfx exist to help with this, but they don't do enough to be useful.

I don't want to implement my own drawing primitives and context stuff, because the baseclass implementation is perfectly good. It's just hamstrung by the fact that plotting millions of individual pixels takes a lot of time, because of the flushes. So I began to look for a way around this.

SDL has the right idea here. Operations you perform on a surface don't appear on screen immediately; it takes a call to SDL_UpdateRect() to make stuff appear. It seems reasonable to add something similar to the bitmap interface. The baseclass implementations would do their plotting and call the flush method when they're done. The baseclass implementation of this method would simply be a no-op, and existing drivers would not implement it, so they'd continue to work as normal. Something like SDL wouldn't do any flushes in PutPixel(), but would save all its updates for the flush method (which I've called UpdateRect() also, because it seems to make as much sense as anything else).

The only problem with this is that if you really really wanted to put a pixel (ie somewhere up in graphics.library), then you have to do a 1x1 rectangle flush. I don't consider that a problem really - if you're doing single-pixel plots up at higher levels you're almost certainly doing it wrong.

Finally, every method should be implemented this way. That is, if a driver implements the UpdateRect() method, it should not do flushes anywhere else.
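
A sketch of the shape I have in mind, with invented names and plain C structures standing in for the hidd machinery:

struct bitmap { unsigned int *pixels; int width, height; };

/* driver primitive: write a pixel, no flush */
static void put_pixel(struct bitmap *bm, int x, int y, unsigned int c) {
    bm->pixels[(long)y * bm->width + x] = c;
}

/* driver flush: push one rectangle to the screen. The baseclass version
 * would be a no-op; sdl.hidd would call SDL_UpdateRect() here */
static void update_rect(struct bitmap *bm, int x, int y, int w, int h) {
    (void)bm; (void)x; (void)y; (void)w; (void)h;
}

/* baseclass fallback: many pixel writes, exactly one flush */
static void fill_rect(struct bitmap *bm, int x, int y, int w, int h,
                      unsigned int c) {
    for (int j = y; j < y + h; j++)
        for (int i = x; i < x + w; i++)
            put_pixel(bm, i, j, c);
    update_rect(bm, x, y, w, h);
}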

I've already started work on this - I have UpdateRect() stubbed and (no-op) implemented in graphics.hidd in one of my private trees. Next I have to modify all the baseclass fallback methods to call it, then modify graphics.library to call it also, and finally, change sdl.hidd to take advantage of it. Once that's done I should be able to delete a large amount of code from sdl.hidd.

The only thing to realise is that these fallbacks will still be slower than native operations, but the overhead will be the normal method call overhead for each GetPixel()/PutPixel() call, not the flush. That's a good thing.

tuesday, 2 october 2007

posted at 19:32
tags:
  • mood: circular

Not much to report over the last few days, but I have been hard at work. sdl.hidd is functionally complete, the code just needs a little cleaning and commenting before I'm ready to release it. What I have been working on since Friday is to get the existing source tree ready to accept it. The major piece of that has been to convert x11gfx.hidd to run off the back of hostlib.resource.

This turned out to be far more difficult than I had thought it would be. x11gfx.hidd requires four libraries: libX11 for core functions, libXext and libc for shared memory extensions (optional), and libXxf86vidmode for fullscreen support (again, optional). So hostlib.resource gets quite a workout.

The real difficulty has come from bringing X11 headers into the same files as AROS headers. For example, extensions/xf86vmode.h defines BOOL and BYTE, which are also defined by AROS' exec/types.h, though incompatibly. It has been quite an effort to get all the pieces working together happily, but I seem to have got there.

The next thing to update was the way the driver was built. Previously it existed in the hosted "ROM", whereas now it must be compiled standalone. Because of the use of hostlib, there's now no need to link against X libs, and in fact they're not even required, so configure needed some serious work to make this happen. That file was quite poorly structured, but I've at least cleaned up part of it. There's lots left that can be done to make it good.

Once all this was done I had to test to make sure that in changing the build setup I hadn't broken the other architectures. I did a successful pc-i386 build with the updates in place, then installed FreeBSD into a virtual machine to test its port. It didn't build past MetaMake, so I tried pristine sources and found the same thing. It seems the FreeBSD port is broken independently of my changes. I've done my best to make sure that my changes at least won't contribute to the breakage if anyone ever tries to bring it up to scratch.

Here's the diff according to Git. There have been a couple of small tweaks since I dropped the code, but nothing significant. If you are able to build hosted, please give it a try and let me know how you get on. These changes are complicated, and although they work well here it's likely that I've screwed something up.

I do find it amusing that in order to commit sdl.hidd - which I hope will one day soon obsolete x11gfx.hidd - I've had to learn the X11 stuff inside and out to make it possible for it to work together with SDL. And for even more irony, remember how my original motivation for this was to get a "real" mouse cursor for FFE, but I couldn't figure out how to make it work in x11gfx.hidd. Well, I now understand the X11 stuff well enough that I could implement it if I wanted to. Figures.

Next up is the sdl.hidd commit, which I think will happen on Thursday at my current rate. I'm happy that I'm going to hit my target, which was the end of this week :)

wednesday, 26 september 2007

posted at 22:23
tags:
  • mood: blitblitblit

On the way home today I started poking at the bitmap class, looking for fallback methods that were calling PutImage() or PutPixel() multiple times, triggering multiple flushes and slowing things down. I found that most images are drawn by a function called PutAlphaImage() which, as the name suggests, blits a rectangle to the bitmap taking the alpha channel on the source data into account. The fallback method does its usual line-at-a-time operation, applying the alpha itself. As usual, it works, but it's slow.

The one thing I didn't want to do was copy the superclass code wholesale, just changing the PutImage() and GetImage() calls to direct pixelbuffer access. It would work, but that kind of code duplication is really bothering me (and I'm considering a more permanent fix for that problem, which I'll write about later). So I started to read through the SDL documentation to find out what it could do with blits and alpha things.

The way I ended up implementing it was to create an SDL surface out of the source pixelbuffer passed to PutAlphaImage(), using SDL_CreateRGBSurfaceFrom(). This function is pretty simple - you pass a pointer to the raw memory data, the dimensions of the data, and the RGB and alpha masks, and you get a surface back. For PutAlphaImage(), the masks are fixed, so they can be hardcoded. Once the surface is obtained, it can be blitted to the target surface using SDL_BlitSurface(), and then discarded. Creating a surface from existing data is an exceptionally lightweight operation, as the original data is used - no copying is done. Freeing the surface leaves the original data intact, so really it's just allocating and destroying an SDL_Surface.
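
In code, the approach looks roughly like this (SDL 1.2 API; the function name and parameters here are illustrative, and the masks assume the 32-bit ARGB source data that PutAlphaImage() receives):

#include <SDL.h>

void put_alpha_image(SDL_Surface *target, void *pixels,
                     int x, int y, int width, int height)
{
    SDL_Surface *src = SDL_CreateRGBSurfaceFrom(pixels, width, height, 32,
                                                width * 4,    /* pitch */
                                                0x00ff0000,   /* red   */
                                                0x0000ff00,   /* green */
                                                0x000000ff,   /* blue  */
                                                0xff000000);  /* alpha */
    SDL_Rect dest = { x, y, 0, 0 };             /* w/h ignored for blits */
    SDL_BlitSurface(src, NULL, target, &dest);  /* alpha-aware, optimised */
    SDL_FreeSurface(src);   /* frees the surface, not our pixel data */
}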

By letting SDL do the copy, you get all the benefits of the SDL video engine, which at its core is a hand-optimised blitter, with special SSE2/Altivec/etc versions that get used where appropriate. Basically, it's faster than any code I'm ever going to write, and it shows - icons and decorations, the two big users of PutAlphaImage(), now appear instantly.

So I committed that and went looking for more speedups. I noticed that windows were drawing a little more slowly than I liked. When a window appears, the window outline is drawn first, then the theme pixmaps, scrollbars, etc are blitted over the top. The outline draws a line at a time (which you can see with debugging on); the pixmaps go fast due to the above changes. I traced this code and, as expected, found multiple calls to PutImage(), but this time they were coming from ... PutImage() itself.

This threw me for a moment until I looked at my PutImage() implementation. Currently it does what most of the other drivers do: it checks the pixel format of the data to be blitted, and handles it directly if it's Native (same format as the target surface) or Native32 (same format as the target surface, but with every pixel in 32 bits, so they need to be "compressed" as they're copied to surfaces with lower depths). Anything else gets referred to the line-at-a-time superclass method, which will do the appropriate format conversion. This is what was happening in this case.

My great revelation was that SDL does pixel format conversion natively when blitting, and it's almost certainly going to be faster at it than graphics.hidd, even without the line-at-a-time overhead. All I have to do is supply the appropriate masks for the pixel formats, which are easily obtained from the bitmap object's PixFmt attribute.

Time to stop waffling and write some code :)

wednesday, 26 september 2007

posted at 15:55
tags:
  • mood: sdl

I have no motivation to blog at the moment, but I'm getting lots of code done. Latest screeny:

I'm opening windows and typing things. In other words, mouse and keyboard are working.

There's a bit left to do - implementing a few more bitmap methods to speed up the graphics (fancy things with gradients and alphas, like Zune, are still sluggish) and fixing up mode selection stuff, then doing a bit of code cleanup and reshuffling, adding some comments, etc - then it's done! I expect to be committing it next week sometime. First I have to fix up the build system to support it and move x11gfx.hidd to use hostlib, so you can take one without the other, and build for SDL without requiring X on your system.

sunday, 23 september 2007

posted at 10:41
tags:
  • mood: content

In keeping with my recent theme of showing a picture with no actual content, here's my results from five minutes ago:

Yep, that's a fully-working SDL video display. It's a little slow as I haven't implemented all the fast blitting methods yet; they will come soon though. Seeing this in all its glory makes me happy.

I'll try to write later about some of the details of this code. I just wanted to throw something up quickly as I'm out for the rest of the day. If you're really interested, I'm posting screenshots every step of the way, more often than I blog about them. Feel free to follow along.

saturday, 22 september 2007

posted at 01:00
tags:
  • mood: zzz

With a few minutes to kill after my shower and before sleep, I hardcoded the masks and shift values to get this:

Obviously still has bugs, but now I can see what I'm doing. Goodnight!

friday, 21 september 2007

posted at 23:59
tags:
  • mood: green

So after a couple of days of wondering why every pixel I got handed by graphics.library was either 0x0 or 0x1 (ie black, or so close to black that it might as well be black), I looked at my pixel format setup code and, on a whim, removed the stuff where I tried to compensate for SDL and AROS colour component masks being different and used them as-is. This is what happened:

Which I guess means that green is somewhere in the middle and thus coincidentally has the same mask, but red and blue are toast, and my fixup code was totally wrong. Some weeks I'm completely moronic.

wednesday, 19 september 2007

posted at 09:16
tags:
  • mood: satisfied
As seen on a morning bus trip:

Pixel format conversion isn't implemented yet, so every pixel is white. The weird thing in the top corner is the mouse pointer, and the other bits are disk icons with their names underneath.

monday, 17 september 2007

posted at 22:27
tags:
  • mood: wtf

Read this comic. Go on, I'll wait.

There's something the comic doesn't tell you. The process that makes photosynthesis happen is exactly the same process that was used to design and implement the graphics hidd interface.

SDL work continues. I have it at the point where it's creating both on- and off-screen bitmaps, though there's a third type ("non-displayable") that I haven't done yet. I can see calls being made to my bitmap PutPixel() methods, so I know that something semi-correct is happening. As yet though, drawing isn't implemented, and AROS still crashes on boot because I haven't written everything yet.

The development process for this has pretty much been:

  • Stub some functions/methods.
  • Run AROS in the debugger.
  • When it crashes, find out where, and go and look up the corresponding source file.
  • Break my head against the poorly structured and mostly uncommented code within.
  • Look at the existing hidds to figure out how to implement it, but give up because all the native hidds are based on vga.hidd, and the only non-native hidd, x11gfx.hidd, has piles of cruft left over from the days before layers.library, when every AROS window had its own X11 window.
  • Get a vague sense of what's going on.
  • Implement enough of the function to stop the crash happening, even though the code itself is probably incorrect.
  • Rinse, repeat.

So things are moving at a glacial pace, but at least they're moving, which is something.

The hidd interface works well enough, but is really weird in some places. For example, when graphics.library wants to create a new bitmap (which is the lowest-level structure in the graphics system), it calls Gfx::NewBitMap(). Confusingly, this method doesn't create and return a bitmap, but rather returns a reference to a bitmap class that can handle the type of bitmap the caller requests (displayable, hardware, etc). The caller then instantiates that class to get its bitmap. This is rather peculiar from an OO standpoint.

Oh, I've just had an epiphany about the bitmap classes. All the existing hidds implement an "on-screen" and an "off-screen" bitmap, which are basically the same but with slightly different initialisation. Most of the common functions are in a single bitmap_common.c file which is #include'd into the class source (a horrible idea no matter where you see it or what the justification).

The on-screen bitmap constructors typically make the bitmap appear on the screen as well, which has really been confusing me as there's also a Gfx::Show() method that gets passed a bitmap to display. This wasn't making sense. What if two on-screen bitmaps were created? Would their constructors cause them both to be displayed? What if an off-screen bitmap is passed to Show()? What if steak knives were actually made of cheese?

Anyway, it's just now clicked. The distinction between on-screen and off-screen bitmaps is entirely internal to the hidd. One of these types is chosen based on the values of the Displayable and Framebuffer bitmap attributes. For SDL though, they're pretty much all the same. I don't need a separate class for each. So all that's needed is to create a bunch of bitmaps, and when Gfx::Show() is called just arrange for that one to be shown.

That last point is slightly more complex. Under SDL, you have a single on-screen surface, and then as many off-screen surfaces as you like, which you can blit to the screen. You can't choose to display an off-screen surface on a whim; you have to blit it. So what this means is that when Gfx::Show() is called on a bitmap, I have to make sure that the current surface matches the bitmap's resolution, and recreate it if not. Then we make a note inside the bitmap object that it is currently on-screen.

When something makes a change to a bitmap, this flag is checked. If the on-screen bitmap is being written to, then the update must be followed with a blit to the current surface. I haven't tested this yet, but I think the idea is sound.
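
Sketched in C, the idea looks something like this (names and fields are my own shorthand rather than the real hidd interface, and I'm assuming 32bpp surfaces; a real version would also clear the previous bitmap's flag when a new one is shown):

    #include <SDL/SDL.h>

    struct sdl_bitmap {
        SDL_Surface *surface;   /* this bitmap's own pixels */
        int          onscreen;  /* set while this is the bitmap being shown */
    };

    static SDL_Surface *screen; /* the single SDL display surface */

    void gfx_show(struct sdl_bitmap *bm) {
        /* recreate the display surface if the resolution doesn't match */
        if (screen == NULL || screen->w != bm->surface->w
                           || screen->h != bm->surface->h)
            screen = SDL_SetVideoMode(bm->surface->w, bm->surface->h,
                                      bm->surface->format->BitsPerPixel, 0);
        bm->onscreen = 1;
        SDL_BlitSurface(bm->surface, NULL, screen, NULL);
        SDL_UpdateRect(screen, 0, 0, 0, 0);
    }

    void bitmap_putpixel(struct sdl_bitmap *bm, int x, int y, Uint32 c) {
        /* write to the bitmap's own surface... */
        ((Uint32 *) bm->surface->pixels)[y * (bm->surface->pitch / 4) + x] = c;

        /* ...and if it's the on-screen one, push the change to the display */
        if (bm->onscreen) {
            SDL_BlitSurface(bm->surface, NULL, screen, NULL);
            SDL_UpdateRect(screen, x, y, 1, 1);
        }
    }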

I estimate another four hours of code before I have some graphics displaying. One day, when all this is finished, I'd like to write a "how to write a graphics driver" doc, and/or fix the damn interface. Yeah, right.

wednesday, 12 september 2007

posted at 18:45
tags:
Another quickie: `hostlib.hidd` in action:

This is a test program that uses hostlib.hidd to load libSDL.so, get function pointers for SDL_Init(), SDL_SetVideoMode() and SDL_Quit(), and then call them to open a window (that big empty one at bottom-right).

The debug output you can see in the bottom corner is just method calls and their arguments so I can follow the flow:

[hostlib] HostLib::New
[hostlib] HostLib::Open: filename=libSDL.so
[hostlib] HostLib::GetPointer: handle=0x0826eaf0, symbol=SDL_Init
[hostlib] HostLib::GetPointer: handle=0x0826eaf0, symbol=SDL_SetVideoMode
[hostlib] HostLib::GetPointer: handle=0x0826eaf0, symbol=SDL_Quit
[hostlib] HostLib::Close: handle=0x0826eaf0
[hostlib] HostLib::Dispose

There's probably a million things you can do with this code. I'll be using it to write the SDL hidd. I wonder what others will come up with.

tuesday, 11 september 2007

posted at 09:15
tags:
  • mood: git

I'm liking git. Easy branching and merging is wonderful. And gitk is just sex:

monday, 10 september 2007

posted at 09:33
tags:
  • mood: hacker

So after the compiler shenanigans of last week I finally managed to write some actual code on Friday. I started with just calls to SDL_Init() and SDL_Quit(), but the compile blew up in my face. The problem came from the fact that I was linking with -lSDL, which would have been fine except that AROS has its own libSDL for SDL apps running inside of AROS. The linker found that first, which is entirely not what was wanted, though even if it had found the right one I guess we'd be looking at namespace clashes for anyone who wanted to run a SDL app inside AROS.

After a bit of thought, it seemed to me that the only way out was to not link to the system libSDL at all, but instead load it at runtime using dlopen() and friends. This can work but isn't without its problems, as loading a library is not the same as linking.

When you write code, you call lots of functions that exist somewhere other than in your .c file. When you compile your .c file, it leaves placeholders for all those functions in the resultant .o file. Linking is the process of pulling in a pile of objects (.o), object archives (.a, also known as static libraries) and shared libraries (.so) and updating all the placeholders to point to the right bits of code.

When you link with a shared library, the link process replaces the function placeholders with stubs that refer to a library file that exists on disk somewhere. When you run the program, a program called the runtime linker (known as ld.so on Linux) looks through it, finds all the stubs, loads all the needed libraries and then fills in all the pieces to make a fully working program.

The idea is simple. By not having to carry a full copy of every required library with every program, program binaries are smaller and so use less disk space. Additionally, it's possible for the runtime linker to keep only a single copy of a shared library in memory and point all programs to it, so you save memory when there are lots of programs running. The downside to the whole mess is the increased complexity in linking: the runtime linker needs to find all the pieces (/etc/ld.so.conf, LD_LIBRARY_PATH and ld's -rpath option), programs can't be as easily copied around because they have libraries that they need, and so on. You don't notice this most of the time because we have smart tools to take care of all this stuff.

So back to AROS. dlopen() is not a linker. It merely opens a shared library and allows you to get at pointers inside it. You can obtain a pointer to a function, and then use that pointer to call the function inside the library. So this is possible:

    #include <dlfcn.h>

    void *handle = dlopen("libSDL.so", RTLD_NOW | RTLD_LOCAL);
    void *fn = dlsym(handle, "SDL_Init");

The problem here is that the library does not contain prototypes, so we have no idea how to pass arguments to the function. We could build the stack by hand (assuming we knew the arguments), but then you don't get the benefit of the compiler doing type and prototype checking.

The normal home for prototypes is in the header files that come with the library. The problem here is that they define functions as real "first-class" functions. If we used them, it would cause the compiler to leave a placeholder for the function which would never get resolved, because we never link -lSDL. That's a build failure. Obviously though, we need the headers, as they have all the prototype information as well as other things we'll need, like structure definitions.

Another problem we have is that we're going to need many, many functions from this library. libSDL has almost 200 functions. While we won't need all of them, we can expect to need a fair few, so we need prototypes and calls to dlsym() for each one.

All this really has to be bruteforced. The method is to create a giant struct which has space to store many, many pointers, and then, for each wanted function, call dlsym() and populate the struct. Function pointers can be declared with the same name as a first-class function (as they're not in the same namespace) and with a prototype. An example is SDL_SetVideoMode, which has the prototype:

    SDL_Surface * SDL_SetVideoMode (int width, int height, int bpp, Uint32 flags);

We can create storage for a function pointer with the same prototype like so:

    SDL_Surface * (*SDL_SetVideoMode) (int width, int height, int bpp, Uint32 flags);

Once we have a struct with all the function pointers declared and initialised, then we'd call a function in it like so:

    struct sdl_funcs *funcs = <allocate and initialise>;
    funcs->SDL_SetVideoMode(640, 480, 16, 0);

The "allocate and initialise" portion of that is a loop that runs through all the function names (stored in a big array), calls dlsym() on each and stows the returned pointer in the struct.

All this is heaps of setup, but it works very well. To help with the setup, I've written a script called soruntime. It takes a shared library and one or more header files as input. It scans the library (using nm) and extracts the names of all the functions that the library provides, then expands the headers (using cpp -E) looking for prototypes for those functions. Once it finds them, it outputs a header file with the library struct (ie all the prototypes), and a code file that has functions to set up and tear down a library.

I'm currently integrating this into my source tree for the SDL HIDD. It could (and probably will) be extended to the X11 HIDD as well, which will provide some uniformity and make it so that if we ever do get an X server ported to AROS, there will be no clashes.

Another thought. With a HIDD that provides facilities for an AROS program/driver to ask the host to load and provide access to a shared library, the graphics HIDDs would not have to be compiled into the kernel anymore and instead could just be standard pieces "inside" AROS. If the UnixIO HIDD was extended to provide better file access features, the other HIDDs (parallel, serial, and the emul filesystem handler) could be modified to use it and thus also be moved into AROS-space. This gives a tight kernel with basically no dependencies. I've started stubbing a hostlib.hidd which will expose dlopen() and friends to AROS for just this purpose.

saturday, 8 september 2007

posted at 13:34
tags:
  • mood: stats
Allow me to reveal part of my motivation for getting a public AROS repository available:

Ohloh is some sort of social networking site for open-source projects and contributors. It works by analysing the complete source history of as many open-source projects as it can get hold of, then building links between people and the different projects they've contributed to.

It's fascinating looking at the graphs that it generates (try the Contributors tab), particularly for a large project. The commit history graph is funky (as seen on my contributions to AROS and jabberd2).

I'm not really a fan of these kinds of sites, but this one has me intrigued. I hope more projects I'm familiar with start to get mentions on here.

friday, 7 september 2007

posted at 23:22
tags:
  • mood: cautiously optimistic

At the start of the week I began writing a SDL HIDD for AROS. Currently it does nothing, just prints a debug message when its init is called to show that it's compiled and running. This was working on Sunday night.

On Monday I started modifying the build system to support disabling the X11 HIDD in favour of the SDL one. My plan is that you'll be able to compile in one or more HIDDs for hosted, and select between them using a command line switch. No more X11 dependency if you don't want it (and if you were doing, say, a Windows or OSX port, you don't want it), and no more hidd.prefs, which is retarded. I finished the build stuff on Tuesday.

Once I'd confirmed it was working properly, I then recompiled with the X11 HIDD to ensure I hadn't broken it. Something strange happened. The kernel booted, but then startup failed with an "illegal instruction".

I figured I'd made some silly mistake (as we know, the depths of AROS contain much deep magic and many dragons), so I gradually backed out my changes, one at a time, testing as I went. No change. I tweaked and experimented over Wednesday and Thursday, with no luck. Finally, in desperation, I pulled a fresh untouched tree from Subversion and built it. It crashed.

The nightlies were working fine, so that pointed to a problem with my build environment. After some discussion with aros-dev and some poking, I finally found out today that GCC 4.2.1 is producing broken code. GCC 4.1.3, which is what the nightlies are compiled with and what I've now reverted to, works properly. I don't know if it's an actual GCC bug or if AROS code is actually wrong but used to work because of some edge case. AROS has some pretty spooky macros which could very well be at fault.

For the moment I'm happy to sit on the older GCC. I've lost days on this, and I'm just glad it's over. I'm looking forward to getting some code written now.

Something I have been able to do while waiting for endless builds to complete is to read Git docs. I really like the look of it and am eager to give it a try for AROS development. I've decided I will develop the SDL HIDD using Git, so I should get the chance to see it in action. I'm not sure yet how to commit from Git back to SVN, but I'm sure I'll figure it out soon enough.

I've also put up public Git repositories of the AROS sources, available here: http://cataclysm.cx/git/. They're updated from AROS Subversion every hour on the hour. Feel free to clone and pull from these repositories; I have bandwidth to burn.

This weekend should be a good one for code, I hope.

monday, 3 september 2007

posted at 15:43
  • mood: shiny

Had a great Father's day weekend. Saturday I went out and bought my AVR and a 74HC573 for the memory latch. I have a couple of 8K RAMs that I picked up on eBay last year and some "ladder" LEDs and other interesting lights, so I should now have everything I need to start experimenting. I'm short a power supply though: it'll be a race to see whether I hack up an old plugpack or drive over to Rosanna to pick up my bench supply from my mate's place.

Sunday I awoke to Francesca awkwardly trying to climb into bed holding her Father's Day loot. I helped her up and she helped me unwrap a copy of Settlers DS (a port of Settlers II). It's got some pretty lousy reviews, and I can see why - the interface is clunky, the gameplay is sluggish and there's obvious bugs. It's still Settlers though, which was a game I was addicted to back in the day, so I'm happy. The girl also gave me a nice picture book about a Daddy bear and his kid bear and all the things they do together, and we had a great time reading it together. I do like being a Dad :)

Today I found this presentation about Git, which I've been hearing lots about but decided was too much of a leap away from Subversion for my brain to handle. At the time I opted for SVK instead, and I love it, but lately I've found it's starting to run out of steam, which seems to be traceable back to its Subversion roots. The presentation was fascinating and enough to convince me that Git is worth my effort, so right now I have git-svn running to pull in the AROS repository. It won't be done before I go home so it'll probably be tomorrow before I can really experiment with it properly. I hope it's as good as everyone claims.

saturday, 1 september 2007

posted at 09:14
tags:
  • mood: grumpy

So I spent my bus rides and my evening yesterday writing a whole new mouse input setup for FFE. It works properly, in that pressing the right mouse button stops the system receiving mouse events. As is typical for AROS, there's a problem. Admittedly it's specific to the hosted environment, but that's where I live, so it's frustrating.

The input system for AROS (and AmigaOS) is layered. At the bottom is the hardware - keyboard, mouse, game ports. Above that is input.device, which hits the hardware (actually it has drivers to do this, but let's just say it does). Higher layers register with input.device and arrange to be called when events come in. Each thing registers with a priority, and when an event happens, the thing with the highest priority gets informed of it first, then the next, until they've all been called. The higher levels can modify the received events so the lower ones don't see them.

It's worth noting that the Commodities Exchange registers with priority 56, with Intuition at 50. console.device is at 0, so it picks up the dregs. CX appearing before Intuition is how it is able to intercept mouse clicks and do fancy things.

On native, Intuition is responsible for managing the mouse pointer, moving it, etc, so if you stop it from receiving mouse events (eg by doing what I do with FFE with a priority 100 handler), it doesn't move. On hosted though, it's a different story. X11 controls the mouse there, and the hosted environment fakes mouse hardware stuff to tell input.handler what's happening. The mouse continues to move though - there's no way to stop it.

This is incredibly frustrating, of course. AROS should do what other emulation things with their own mouse (eg VMWare) do - capture mouse input from X, releasing it only when some magic key combination is pressed.

To make this happen means hacking on the X11 HIDD, which is some of the worst code ever. So as usual, implementing some tiny feature (making my spaceship fly properly) means learning and rewriting some major OS subsystem. And people wonder why AROS is hard for devs to get into? All I want to do is write my app. I don't want to have to fix every damn mistake in the OS to get there.

Yes, this is something of a rant. I've been here before though - remember that whole DOS trip I went on a few months back?

So FFE is backburnered for a little while, and I'm considering writing a new HIDD based on SDL (I stubbed one a while ago). If I did, it would be clean and pure and incredibly well documented so that graphics HIDDs aren't deep magic any more.

thursday, 30 august 2007

posted at 23:37
tags:
  • mood: pointy

So my interest for this week (I'm fickle so you can guarantee that next week I'll be doing something else) is porting games to AROS. My focus therefore has been on two things: getting an up-to-date C++ compiler working, and hacking on the JJFFE port.

A C++ compiler is needed if I'm going to port Battle for Wesnoth, but also if I'm going to have a shot at getting a WebKit port happening (yes, I haven't forgotten). The last C++ compiler we seem to have available is GCC 3.3.1, which is quite old now. I'm attempting a port of the latest release of GCC, which is 4.2.1. The C compiler seems to be fine; just some minor changes to the already-extant 4.0.0 patch we have. I'm having some trouble regenerating autoconf files though - it seems that GCC is very finicky about its build environment, and I'm not meeting its high standards yet. I will keep at it - it's a good side project because it mostly consists of making a couple of small tweaks and then waiting half an hour while the build runs and fails. If I succeed, then I'll be trying to keep the patches updated as each new version of GCC comes out (assuming I can't convince the developers to accept the patches into mainline).

Meanwhile, JJFFE has been getting some love. So far I've cleaned up the existing code and properly merged Kalamatee's window scaling code and the other changes he made, which were great work but left the source in a real mess. Things are now looking much nicer, so it's time for some features.

In FFE you orient the ship by moving the mouse while holding down the right mouse button. In the original version and the SDL and Win32 ports, holding the right button causes the mouse pointer to disappear and be locked in place. Currently in the AROS port this doesn't happen - a right hold still moves the ship around, but the mouse pointer moves too, and if you move it out of the window then all the movement stops.

I've been scouring docs, newsgroups and forums for a way of disabling the mouse pointer and have come up with nothing. This evening I figured out a way that I think will work. The idea is simple - when the right button is clicked, open input.device and eat up all the incoming mouse events before Intuition can get hold of them. I'll have to process raw mouse events myself, but that shouldn't be too hard. When the right button is released, I remove my fingers from the pie and Intuition continues as normal. A couple of calls to ClearPointer() and SetPointer() should provide the vanishing pointer. I'll have a crack at implementing this tomorrow.
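
The classic pattern looks roughly like this (a sketch: the handler's calling convention is platform-specific - register arguments on m68k - so the is_Code wiring here is schematic, and error handling is omitted):

    #include <exec/io.h>
    #include <exec/interrupts.h>
    #include <devices/input.h>
    #include <devices/inputevent.h>
    #include <proto/exec.h>

    /* swallow raw mouse events before lower-priority handlers see them */
    static struct InputEvent *eat_mouse(struct InputEvent *chain, APTR data)
    {
        struct InputEvent *e;
        for (e = chain; e != NULL; e = e->ie_NextEvent)
            if (e->ie_Class == IECLASS_RAWMOUSE)
                e->ie_Class = IECLASS_NULL;   /* neutered; Intuition never sees it */
        return chain;
    }

    static struct Interrupt handler;

    void grab_mouse(struct IOStdReq *input_io)
    {
        handler.is_Node.ln_Pri  = 100;   /* above CX (56) and Intuition (50) */
        handler.is_Node.ln_Name = "mouse grabber";
        handler.is_Code = (void (*)()) eat_mouse;

        input_io->io_Command = IND_ADDHANDLER;
        input_io->io_Data    = (APTR) &handler;
        DoIO((struct IORequest *) input_io);
    }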

It's really nice to be writing proper code again and nutting out tricky stuff. I missed it.

monday, 27 august 2007

posted at 14:10
  • mood: distracted

Another week of not much. The weather is glorious at the moment; yesterday I spent a couple of hours outside mowing the grass, which is a pretty huge undertaking. It looks fantastic and has got me motivated to tidy the rest of the garden and finally get the garage sorted out, which I'll probably try to do a bit of in the evenings this week, particularly if it stays warm(er) after the sun goes down.

I've finished reading the AVR book, and have most of the design for the graphics driver done in my head. I'm pretty much settled on the ATmega162 for starting out, as it should have everything I need - more than enough program memory, plenty of internal RAM, a JTAG mode and enough pins to hook up an external SRAM. Jaycar have them for $20, so I have a cheap supply without having to do crappy mail-order stuff. I still have to sit down and write out the whole design to produce a parts list, but once that's done I can go shopping. The plan is to do that on Saturday morning, taking the girl to Ringwood on the train. She's been begging for a train ride for a while now, so that should take care of both.

Gub recently offered our services to MOPS Australia (of which she is the coordinator of a group at our church) to bring their website out of 1996. To this end I've installed Joomla! and am trying to learn a bit about what makes it tick. PHP is horrible, and the community is weird, but it looks like I'm not going to have to write too much code, which is good - this really has to be off the ground before the baby arrives.

I got my AROS tree up to date and building this morning, as I really need to write some code again soon; my brain is ready for it, I think. At the moment I'm just fiddling, getting a feel for the code again and seeing if there's anything that I really feel like playing with. I'm not committing to anything yet, nor am I soliciting suggestions, gentle reader ;)

Back to work. We're on a tight deadline. Don't ask.

tuesday, 21 august 2007

posted at 09:17

Third week without writing any code has been and gone, so very little to report or show. I've been researching this hardware stuff. I have a reasonably good book from the library about the AVR (Programming and Customizing the AVR Microcontroller). It's not brilliant but is good for a skim through. The library didn't have anything better. I've also printed off a bunch of datasheets which look very comprehensive and will take me ages to get through.

The other book I grabbed (on recommendation from the 6502.org forum) is The Art of Electronics by Horowitz and Hill. I've only read a few pages of the first chapter, but already I can tell this book is exceptional - incredibly detailed, but it gently takes you from fundamentals through to hard stuff without an equation in sight. I'll study it a little over the next couple of weeks, but I have a feeling I may spring for a copy, and that's rare - I never buy reference books because the internet is usually so much better. This book should bring me up to speed on the fundamentals and fill the many gaps in my theoretical knowledge, which I'm really starting to feel.

I zoomed over to Jaycar on Friday and bought their AVR programmer kit, and built it on Sunday. The board was a cinch, but I suck at cables so I'll have to buy a few new connectors to replace the ones I broke. Haven't tested the circuit yet; I need an AVR and its associated board and whatever else, and some code written, before I can try that. This is why I'm buying (mostly) prebuilt tools - designing something like this is too hard right now, as I'd have no real way to tell if it was working or not.

This project is going to be very slow, as I can only buy parts just after payday, which comes around every couple of weeks. That's probably all right though - I'm having some serious pain in my right shoulder, which is aggravated by using the laptop, so I'm planning to spend my bus trips for the next little while catching up on my reading, and spending any computer time in the evening working on some web stuff (that I'll write about later, maybe today even).

saturday, 11 august 2007

posted at 14:09
  • mood: unfocused
  • music: jch - carcrash

I feel like I should write something, though I haven't really done much lately so this is going to be boring.

I went back to work this week after a wonderful week off, and I really didn't want to be there. It's been a frustratingly slow and boring week, but it got better towards the end. Next week should be much better as we're finally ready to begin installing production servers rather than treading water like we have been for the last few months.

I haven't written any code in the last week, mostly because work has been so draining. My bus trips have consisted solely of playing Advance Wars, which is still as good as ever, although it's starting to get harder and I'm really having to think hard, which means I lose a lot on a week like this one.

I still need to fix up the circle routine. I bought a nice pad of graph paper and when I feel like it I'll sit down and work out the maths again from first principles, as I don't fully understand Steve Judd's algorithm. I still see no reason why a convincing tunnel effect shouldn't be possible. I'm eager to get it out of the way because I really want to write a filled 3D polygon spinner; I have a neat idea for a technique that should make the thing fly.

On the day I bought Advance Wars I also bought a C64 DTV. It's very cool, and I'm really excited about the idea of building a full computer out of it - it's a C64 but with some nifty new features, mostly new graphics modes and more colours. I noticed the colours were really washed out, so some searching revealed a detailed analysis and fix for the problem. I implemented it on Sunday night, in a horribly hackish way with big chunky resistors across SMD resistors, with a pair of pliers and my bulky old soldering iron. My games now look beautiful.

My two goals for my DTV are to build it into a keyboard, and to give it an SD card reader. It's still pretty far off, but it should be a fun project.

That reminded me of just how much I enjoy doing electronics work, and I've always wanted to design and build a computer from scratch, so I started looking into that again. Of course I love the 6502 and friends, and I always assumed that that is what I'd use. These parts are just a pain to get hold of though (no local source), so I've been looking for something else. I've pretty much settled on the Atmel AVR line of microcontrollers - the instruction set looks closer to the 6502 than the main alternative, the Microchip PIC, and seems to have fewer limitations.

For my first trick, I want to implement a video generator, which would eventually become the video subsystem for my computer. I've found a couple of projects where people have done this before (AVR-based or PIC-based), and it doesn't seem overly complicated. Initially it'll be a standalone generator with perhaps a simple command setup so that an external processor can drive it, but there'll be no general-purpose framebuffer (ie, user programs won't be able to draw directly). I will want the ability to do direct drawing eventually though, which means DMA blitting, memory sharing (eg like the C64 does with its wacky CPU-disabling stuff), or dual-channel RAM. Or maybe something else, I don't know yet. It's why I'm avoiding it for now :P

I decided that if any of this is ever going to happen, then I'm going to have to do it a little bit at a time - if I wait until I have time and money to do it all at once, it will never happen. To that end I went to Jaycar last night and bought a logic probe, something I've been hankering for for over 18 months. The game is afoot!

friday, 3 august 2007

posted at 10:47
tags:
  • mood: tactical
  • music: pearl jam - rearviewmirror

Long time hey. I'm having a week off. So far I've been away for a couple of nights alone with my wife, been to the zoo, visited my mum and my in-laws, had dinner with some C64ers from years gone by and done several hundred other odd jobs. Its all been lots of fun and very relaxing.

I haven't written any code and have barely even touched my laptop, all because of a highly addictive game that I picked up last week. With the DS being as portable as it is, I've had a lot of time to play - on holiday, in the car, before bed, etc. I typically play the DS for maybe an hour a day and the battery lasts a couple of weeks. Its currently in the middle of its fourth charge since Saturday. That should be telling.

So far I've nearly finished the campaign, then I'll dive into the standalone maps and start trying to unlock stuff. I'm not going to review it or anything like that, but if you care at all about my opinion (and you don't) you will go and buy this game today. You'll buy a DS to go with it if you don't already have one.

I'll probably try and get hold of the earlier GBA versions too. From what I can tell they're just as good, and the story ties in too. And there might be extra unlockable things in the DS version if you have the GBA carts inserted. Oh, and there's another one coming out in December, supposedly. I may never write any code again.

thursday, 26 july 2007

posted at 09:11
tags:
  • mood: toys
  • music: katie melua - piece by piece

Gub got a nice tax return a few days ago (love family tax benefits) and insisted that I spend some of it on myself. So I looked around and found the RAM upgrade for my laptop that I priced at $169 a couple of months ago was now down to $99. I'm not passing that up, so as of yesterday at lunchtime my laptop has a nice new 1GB stick in it, taking me up to 1.5GB (ish, you know how inaccurate all these numbers are). It's noticeably faster.

The spending spree isn't over yet - I've also been instructed to buy a DS game. Advance Wars: Dual Strike looks like it will be the winner. I'll go looking for a pre-owned version from EB tonight. If I can get it cheaper, then I'll have a little bit over to buy a C64 DTV, which I notice Target had squillions of at $25 a pop.

New toys make us powerful and happy :)

sunday, 22 july 2007

posted at 16:01
tags:
  • mood: ninja
  • music: jeroen tel - cybernoid ii

I've been working on the polar coordinate plotter over the last few days, and I'm making some good progress. This, however, is not a circle:

I know why it's happening, but haven't yet worked out a fix. I'll blog about it later, once I get it working properly.

Anyway, I wanted to do something different in the spare couple of hours I had yesterday, so I wrote a little tool for working with character sets. Its available over at toybox: 64charset

It has three basic functions: create a charset from a definition file, decompile a charset into a definition file, and view a charset. What this gives is the ability to edit a character set in a text editor. It's not as convenient as working in a real editor, but for quick edits and hacks you can't beat it.

It was a hell of a lot of fun to write, and I had the whole thing working in about three hours of actual work, which I was really impressed with. Perl is just an incredible language for getting shit done fast. This fact is nothing new to me, I've been using Perl for as long as I've been using Unix, but every now and again you get reminded of just how truly excellent it can be. The SDL bindings are pretty awesome too.

tuesday, 17 july 2007

posted at 17:30
tags:

Over the last couple of days I've finally made some decent progress with the C64 version of the tunnel:

My reflection code isn't quite right yet, so it's disabled here. The fact that this is a static image isn't really hurting you much - if you were running it for real, you'd be watching it in all of its glorious three frames per second. Pretty woeful, really.

I haven't really optimised the code much. I did spend some time on the plot routine this morning, which can now plot a single [x,y] point in 54-55 cycles. I feel like I should be able to shave perhaps ten cycles off, though I'm not seeing where yet. My clear routine, as usual, is the killer - it's still a naive implementation that clears the entire bitmap. I attempted to make it a little smarter by having the plot routine record which page it's writing to, and only clearing the dirty pages (via some hefty unrolled loops), which gained me a single FPS, but it's not really enough. I want to only clear the points that were plotted, but at a potential 4096 plots (64 rings x 64 plots per ring, since every eighth point is plotted), the cost would be prohibitive - it would take 8K to store all the memory pointers for those plots, and walking them would make clearing take at least three times as long as the naive approach! I need to think on this more.

The real killer, though, seems to be the fact that every point requires a bunch of table lookups, at least three: the ring offset indexes, then the ring offset itself, and the ring point position as well. I don't see how I'm going to be able to optimise this much. It is in a tight loop, so even small gains should make a difference, but I'm hoping to get this thing to at least 25 FPS (50 seems a far-off impossibility at this stage).

I do have another idea though. Since the whole thing is just drawing circles, it occurs to me that rather than using an [x,y] plotter, I could be better served by an [r,θ] plotter (ie one that uses polar coordinates rather than cartesian coordinates). This way I could do away with the circle tables, because each ring would just be drawn by looping θ from 0 to 255. Each ring is achieved by looping the radius.

Of course, the plotter then needs to know how to convert r and θ into memory locations, but that should be easy enough with appropriately-constructed tables. The equations for converting polar coordinates to cartesian coordinates are:

x = r * cos(θ)
y = r * sin(θ)

That multiply is a problem, but I've recently found out about a super-fast multiplication technique based on a table of squares (in the same spirit as log tables). In essence:

f(x) = x^2/4
a*b = f(a+b) - f(a-b)

So we keep a table of squares, which lets the multiplication happen very quickly. After that, the x and y offsets are added and the point is converted to a memory location and plotted. I'm hoping that by spending some time working on the tables I can embed a good amount of the memory location stuff in the tables themselves, reducing the amount of work that needs to be done.
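
For what it's worth, here's the squares-table multiply in C (table size and operand ranges are my choice for illustration; a C64 version keeps the tables in memory and works in 8 bits). The integer division by 4 stays exact in the difference because a+b and a-b always have the same parity:

    #include <stdint.h>

    /* f(x) = x^2 / 4, tabulated; then a*b = f(a+b) - f(a-b) */
    static int32_t sq4[512];

    void mul_init(void) {
        for (int x = 0; x < 512; x++)
            sq4[x] = (x * x) / 4;
    }

    /* operands in 0..127 so a+b stays inside the table */
    int32_t mul(int a, int b) {
        int d = a - b;
        if (d < 0) d = -d;            /* f is even: f(a-b) == f(|a-b|) */
        return sq4[a + b] - sq4[d];
    }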

I've still got half an hour of bus trip left. To work!

sunday, 15 july 2007

posted at 21:51
tags:

Friday night I went to my brother's place along with a few other guys to play some games and have some fun. I had a ball, stayed up all night and much of the morning playing various games - Halo, Unreal Tournament (& UT2004), Half-Life 2 and Counter-Strike: Source. After that, I caught an hour's sleep on the couch, and maybe another hour in the car on the way home (I wasn't driving). Understandably, I felt rather groggy through the afternoon and Gub finally put me to bed at about 5.30. I woke up an hour later and was completely useless - it was like my body was still fast asleep and no matter what I did I couldn't convince it to move. At seven Gub sent me to bed and I slept right through until nine this morning. I've felt tired most of the day, dozed through church (oops), and now it's 10 and I'm preparing to head to bed ready for my 6am start.

So in short, I had an awesome time playing games, but lost the entire weekend in recovery. The obvious answer would have been to play all night and then sleep all day, but then I'm afraid of not being able to sleep at normal time on Saturday night. Am I just too old (27 by age, though with five years of marriage and a kid and another on the way, perhaps my bones are older)? Is there some optimal combination of play and sleep that will let me be normal on Saturday evening and Sunday?

sunday, 15 july 2007

posted at 16:56
tags:

It's been another busy week. As expected, work was crazy, and will continue to be for a few weeks yet, so it's unlikely that you'll see massive amounts of code coming from me; I'm just too tired after a whole day of hacking at work. But I still get little bits done, and make steady progress.

I'm still working on the tunnel. During the week I put together a first cut of the code, which naturally didn't work. I hadn't realised just how much debugging tools make life easier. It's been a complete pain to debug - I can't print values out at key points! I have the VICE monitor only, which only lets me inspect memory, and since my intuition hasn't come back yet, those numbers are really difficult to use without a good mental model. I have managed to debug my point plotting routine so far, so that's something. Next time around (tonight if I'm not too tired, otherwise on the bus tomorrow) I'll be stripping back the code to a straight circle drawing routine, and then building it back up from there. I have a feeling this may yield some optimisations in the end, as in my efforts to port my C version I lost sight of the fact that all this routine needs to do is draw circles of various sizes, nothing more. Here's hoping.

I had a bit of an epiphany this morning about how to make filled multicolour vectors run really fast. It works off the fact that only the face edges have more than one colour. The areas within each face are just a single colour.

Naturally, I'd expect that I'd just have to fill in each and every pixel in that space, but not so. Since they're just solid blocks of colour, I can modify the value in the colour map and effectively "paint" an 8x8 area by storing to a single memory location, rather than eight (assuming a heavily-optimised routine). Clearing also becomes easier - only the locations that have an edge on them need to be cleared, again speeding things up.
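
In C terms the trick is something like this (a sketch: addresses assume the usual hires layout, with screen RAM at $0400 acting as the per-cell colour map and the bitmap at $2000):

    /* C64 hires bitmap: 40x25 cells, 8 bytes per cell */
    static volatile unsigned char * const colmap = (unsigned char *) 0x0400;
    static volatile unsigned char * const bitmap = (unsigned char *) 0x2000;

    /* paint a solid cell: one store to the colour map... */
    void paint_cell(int row, int col, unsigned char colour) {
        colmap[row * 40 + col] = colour << 4;   /* upper nibble colours set pixels */
    }

    /* ...versus eight stores to fill the same cell in the bitmap */
    void fill_cell(int row, int col) {
        for (int i = 0; i < 8; i++)
            bitmap[row * 320 + col * 8 + i] = 0xff;
    }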

I have to assume that this technique has been thought of/used in existing routines, because filled vectors got boring in 1994, but I'm still happy that I came up with the idea independently. Improving my vector code will be my next trick after the tunnel, I think.

friday, 6 july 2007

posted at 22:01
tags:

I have a development environment for demos! I didn't have much luck figuring out the cc65 linker - I think it's really meant for "proper" linking, like real relocatable Unixy kind of things (and I'm being highly descriptive, I know). The assembler is good though - I like the format (very close to Turbo Assembler), and via cl65 it's trivial to run.

So, I've written a couple of tools to make things work the way I want. 64asm loops over all the .s files in the current directory, extracts their start address from a .ORG directive, and calls cl65 to assemble them. The start address gets embedded into the filename for 64link. This one takes multiple files with their load locations in the filename, loads them all into a "virtual" 64K memory space, and then spits the whole lot out in a format that can be loaded directly by a C64. Now building a demo part from source and pulling in all its necessary pieces (music, graphics, tables, text, etc) is two simple commands. The gory details are over at toybox.

There's a few other tools I'll need as time goes on - a sine/cosine table generator, an ASCII-to-screencode converter and a code relocator, and probably others. I'd also like to write a proper 64 assembly mode for VIM with syntax highlighting. And while we're at it, a crazy-smart hex editing mode for VIM would be totally awesome - every hex editor I've tried so far is complete pants.

I've been refactoring an unreleased demopart from 1997 to get a feel for the new setup. It's working perfectly, which makes me happy. I wrote some rather shoddy code in those days, so I've been able to clean up a fair bit of it. Most notably, I've removed all the self-modifying code - it's really not a smart idea. I've also been adding comments, but it's tricky - I didn't write any comments the first time, and the whole thing is based on some rather complex maths I worked out when I wrote it (reducing rotation matrix calculations so that there's no multiplications in them), so I don't quite get all of it. It's proven the development environment is solid though, which makes me happy.

Next is to implement the dot tunnel :P

thursday, 5 july 2007

posted at 15:56
tags:

Fast update, just about to head out of the office for another day.

Work has been psychotically busy, leaving me with very little brain space at the end of the day, so I haven't done much code in the last few days - it's just too hard to think. What I have been doing is updating my entry and the System entry over at the CSDb. Finally, you can get some insight into where I came from. There's still a bit of stuff to upload and cross-reference, but it's much more complete than it was a week ago.

I'm also idly pondering this demo that we're wanting to make. I've started experimenting with the assembler portion of cc65, because it seems to have pretty much everything that I might want. I'm not yet sure how to get it to link things using a custom memory map rather than just assuming it has the run of the place, but I think I know where I'm going wrong. I'll have another crack at it on the bus, using the vector part I wrote TEN YEARS AGO this month. Good grief.

After that I guess I'll poke at something AROS-related again. I am unfocused at the moment, but I'm fortunate that AROS is too - there's plenty of places where I can fiddle for an hour, fix or add something tiny, and it still counts as progress.

monday, 2 july 2007

posted at 10:40
tags:

I've been back working on fat.handler this weekend. I had to look at the code for something and actually found it kind of interesting, a feeling I thought was long past.

First thing was to add 64-bit support, so that it can handle partitions larger than 4GB. This was pretty easy: just new code in the cache to probe the underlying device to see if it supports 64-bit extensions, and then later, if a request comes in for data that is over the 4GB boundary, use a 64-bit read or write operation rather than the standard one (or error, if the probe didn't find any 64-bit extensions). There are three commonly-used 64-bit extensions in the Amiga world - TD64, new-style TD64, and DirectSCSI. The first two are supported, but DirectSCSI shouldn't be hard to add.
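
The dispatch ends up looking roughly like this (a sketch: the TD64 command values and the convention of passing the upper 32 bits of the offset in io_Actual are from my reading of the spec, so treat them as assumptions; error handling omitted):

    #include <exec/types.h>
    #include <exec/io.h>
    #include <proto/exec.h>

    #define TD_READ64        24        /* TD64 extension (assumed value) */
    #define NSCMD_TD_READ64  0xc000    /* new-style extension (assumed value) */

    /* kind: 1 = TD64, 2 = new-style, as discovered by the probe */
    void device_read(struct IOStdReq *io, unsigned long long offset,
                     APTR buf, ULONG len, int kind)
    {
        if (offset + len > 0xffffffffULL) {
            /* past the 4GB boundary: need one of the 64-bit extensions */
            io->io_Command = (kind == 1) ? TD_READ64 : NSCMD_TD_READ64;
            io->io_Offset  = offset & 0xffffffff;   /* low 32 bits */
            io->io_Actual  = offset >> 32;          /* high 32 bits */
        } else {
            io->io_Command = CMD_READ;
            io->io_Offset  = offset;
        }
        io->io_Data   = buf;
        io->io_Length = len;
        DoIO((struct IORequest *) io);
    }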

I haven't done any testing yet. It's basically impossible to test in hosted, as fdsk.device doesn't have 64-bit support, and adding it would mean that DOS would need 64-bit support too (since it's a loopback-type device). ata.device for native has support, but that means needing a large FAT partition installed on a real box, or in VMWare, and to do that I pretty much need to install an OS that uses it. So far I've tried FreeDOS, which crashed, and DR-DOS, which created the partition but couldn't write the partition table for some reason. The next thing to try is Windows 98SE/ME/2000, all of which can use large FAT partitions. The code should be available in tonight's nightly build, so if you want to test before I get a chance, let me know how it goes.

This morning I started implementing write-back caching. The concept here is pretty simple - when the handler asks the cache to write some data, the cache reports success immediately but just marks the data as "to be written". Then at regular intervals (eg five seconds) it writes all of these "dirty" blocks out to disk in one go. This makes things feel faster for the user, and has the potential to reduce disk activity (== less wear and lower power consumption), at the risk of losing data in the event of a power failure or loss of the device (like pulling the disk out). Typically removable media use write-through caching (ie write immediately), while fixed disks use write-back.
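
In outline, the cache side looks like this (a sketch of the idea only, not the actual code; locking is deliberately left out, which is exactly the problem discussed below):

    #include <stdbool.h>
    #include <stddef.h>

    struct block {
        struct block *next_dirty;   /* chain of blocks waiting to be flushed */
        bool          dirty;
        /* ... block number, data buffer, etc ... */
    };

    static struct block *dirty_list;

    /* handler asks the cache to write: report success now, mark dirty */
    void cache_write(struct block *b) {
        if (!b->dirty) {
            b->dirty = true;
            b->next_dirty = dirty_list;
            dirty_list = b;
        }
    }

    /* flush task, woken every few seconds: push all dirty blocks out */
    void cache_flush(void) {
        for (struct block *b = dirty_list; b != NULL; b = b->next_dirty) {
            /* write the block's data out to the device here */
            b->dirty = false;
        }
        dirty_list = NULL;
    }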

Since this requires a separate task that sits and waits and flushes the dirty blocks when called, it means the cache needs locking. Locking will also be needed in the future if a filesystem wants to be multi-threaded (and the cache is actually in a cache.library, available to all). I've partially implemented the locking - so far there is locking around cache operations, but not block operations.

I hate that there's no way (in most locking schemes, not just AROS) to promote a read lock to a write lock. Usually you have to drop the original lock before taking the write lock, which means there's a moment where you're not holding any lock and someone can come and steal it out from under you. I have a workaround for POSIX threads that I'm using in production code, but it requires condition variables, which we don't currently have for AROS semaphores. I think for the cache it won't be a problem, but I'm thinking carefully about it because deadlocks are just too easy.
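
The pthreads workaround is along these lines (a sketch from memory: a mutex, a condition variable, a reader count and a writer flag; note it only supports one thread attempting promotion at a time - two simultaneous promoters would deadlock):

    #include <pthread.h>

    struct rwlock {
        pthread_mutex_t mtx;
        pthread_cond_t  cv;
        int readers;
        int writer;
    };

    void read_lock(struct rwlock *l) {
        pthread_mutex_lock(&l->mtx);
        while (l->writer)
            pthread_cond_wait(&l->cv, &l->mtx);
        l->readers++;
        pthread_mutex_unlock(&l->mtx);
    }

    void read_unlock(struct rwlock *l) {
        pthread_mutex_lock(&l->mtx);
        if (--l->readers == 0)
            pthread_cond_broadcast(&l->cv);
        pthread_mutex_unlock(&l->mtx);
    }

    /* promotion: the mutex is held across the whole swap, so there is no
       window where we hold nothing and someone can sneak in */
    void promote_to_write(struct rwlock *l) {
        pthread_mutex_lock(&l->mtx);
        l->readers--;               /* stop counting ourselves as a reader */
        l->writer = 1;              /* blocks any new readers immediately */
        while (l->readers > 0)
            pthread_cond_wait(&l->cv, &l->mtx);
        pthread_mutex_unlock(&l->mtx);
    }

    void write_unlock(struct rwlock *l) {
        pthread_mutex_lock(&l->mtx);
        l->writer = 0;
        pthread_cond_broadcast(&l->cv);
        pthread_mutex_unlock(&l->mtx);
    }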

thursday, 28 june 2007

posted at 16:07
tags:

Last night I took a break from AROS and starting fiddling with some graphics stuff. A few of us old Aussie C64 guys are planning a get together soon, with a hope of producing a new demo. Whether that will happen or we'll just end up drinking or goofing off remains to be seen, but I'll go in with the best intentions.

Something I always wanted to write for the C64 is a dot tunnel. It's pretty simple in concept - plot a number of concentric rings and move them about in such a way that it looks like you're flying down a never-ending circular tunnel. Last night I started fiddling with the concept using SDL. This morning on the bus I got the last kinks out and came up with this:

I brought it into work and showed Sam (our resident math/graphics geek), who suggested drawing some lines to see the effect better. I hadn't thought of this initially because there's just no way the C64 will be able to cope with plotting that many points each frame, but it was certainly worth a look. We made it plot lines instead of dots to draw the circles, and then added some nice lines in between the circles to give even better depth:

(It looks better when its moving, really).

It's a great hack, with no real 3D involved. At boot it calculates 64 circles with ever-decreasing diameter, and two tables (one sine, one cosine) for the movement. Each frame it computes the new location of the inner circle, but remembers the previous locations and plots each other circle at one of the old positions, moving outwards. It's kinda hard to explain here in text, so here's a rough sketch of the idea.
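
This is an illustrative reconstruction rather than the actual source - plot() and the exact radii/movement constants are made up:

    #include <math.h>

    #define RINGS   64
    #define POINTS  64
    #define PI      3.14159265f

    extern void plot(int x, int y);      /* backend-specific pixel plot */

    static float radius[RINGS];          /* ring 0 is the tiny one "far away" */
    static float move_x[256], move_y[256];
    static float cx[RINGS], cy[RINGS];   /* remembered centre positions */

    void tunnel_init(void) {
        for (int r = 0; r < RINGS; r++)
            radius[r] = 160.0f / (RINGS - r);    /* perspective-ish growth */
        for (int i = 0; i < 256; i++) {
            move_x[i] = sinf(i * 2 * PI / 256) * 40;
            move_y[i] = cosf(i * 2 * PI / 256) * 40;
        }
    }

    void tunnel_frame(int t) {
        /* each ring takes the position the next-inner ring had last frame,
           so old centres flow outwards, towards the viewer */
        for (int r = RINGS - 1; r > 0; r--) {
            cx[r] = cx[r - 1];
            cy[r] = cy[r - 1];
        }
        cx[0] = 160 + move_x[t & 255];   /* new position for the inner ring */
        cy[0] = 100 + move_y[t & 255];

        for (int r = 0; r < RINGS; r++)
            for (int p = 0; p < POINTS; p++) {
                float a = p * 2 * PI / POINTS;
                plot((int) (cx[r] + radius[r] * cosf(a)),
                     (int) (cy[r] + radius[r] * sinf(a)));
            }
    }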

I've done some basic sums and at least 20KB, possibly up to 40KB (depending on how I store the graphics and the ninth x-coordinate bit) of table space will be required to port this to the C64. Should leave just enough room for colour, music, code and a loader :)

tuesday, 26 june 2007

posted at 23:34
tags:

Status update.

I have a mostly-working test.library that implements a TAP producer. So far it does TAP output, has basic success/fail counters and a couple of other bits. The idea is that it will support the minimum amount of generic primitives to build just about any kind of test on top of, with some smarter stuff (ok(), is(), etc) built as macros on top of that. See Perl's Test::More and my own Tests for C for an idea of where I want to take this.

Nothing much else is happening code-wise, mostly because I'm in the middle of a hardcore development project at work (a virus scanner for Lotus Domino) which is on a tight schedule and isn't leaving me much brain space for other code. I did bootstrap a SDL HIDD for AROS; it doesn't work yet but it's something I'd like to take another look at when I get a chance.

saturday, 23 june 2007

posted at 16:37
tags:

(Slight exaggeration in title? Possibly.)

I took Francesca bowling today, which we've never done before. In many ways, we still haven't. I bowled my first frame and then it was her turn. She picked up her pink ball from the rack, holding it in both hands because she's still small, and then tripped on the step. Her hands hit the ground, and the ball on top of them. She screamed of course, and shortly after we had blood everywhere.

We went to the nearest local doctor, who took one look at it and sent us to emergency at the hospital. Being a kid she got fast-tracked, so we were in and out in under four hours. She had to have X-rays, and it turns out that she fractured the tip of her left pinky finger. She has a horrible jagged gash on the tip that goes right down to the bone, and she's ruined the nailbed, so she'll lose the nail. Being so little, they just dressed it and sent her home. Young bodies heal quickly.

Francesca was amazing through the whole thing. She was her usual cheerful self through the entire ordeal, happily playing and laughing with me while we waited. She never complained even when it was difficult to do something with only one hand; she just found a way to compensate. The only time she cried was near the end when they were doing the final dressing, and those cries were mostly her saying "I'm tired and I want to go home." Fair enough - by that time she was two hours past her nap time and hadn't eaten for four. We shared some banana and some chocolate just after they let us go, which was nice.

I asked her about bowling on the way home. She's not exactly bubbling with excitement at the prospect of doing it again, but neither is she averse to the idea. It's not often that you're left in awe of your children, and it's really nice when it happens :)

saturday, 23 june 2007

posted at 01:50
tags:

It's nearly 2am, and I'm sleepy. Just a quick one to give you an idea of where this is going:

extern TESTFUNC(init);
extern TESTFUNC(files);
extern TESTFUNC(cleanup);

static TestFunc tests[] = {
    TESTNAME(init),
    TESTNAME(files),
    TESTNAME(cleanup),
    NULL
};

int main(int argc, char **argv) {
    struct TestSuite *ts;
    struct TagItem tags[] = {
        { TS_Plan,        (IPTR) 3     },
        { TS_Functions,   (IPTR) tests },
        { TAG_DONE,       0            }
    };

    ts = CreateTestSuite(tags);
    RunTestSuite(ts);
    DestroyTestSuite(ts); 

    return 0;
}

This is a harness bootstrap. The details are squirreled away in test.library. Note how simple the harness is - it can be generated from your test sources.

Bedtime. More tomorrow, perhaps.

friday, 22 june 2007

posted at 21:09
tags:

In typical fashion, I got bored with the pipe handler. It'll still have to be done, of course, but a couple of things about it became non-trivial, so I had to start thinking and designing, and that's not interesting or fast, so I couldn't be bothered anymore.

What did interest me is a couple of bugs that appeared in the queue: #1740715 and #1740717. Both were described very well and so were easy to reproduce. The first was clearly related to work I'd been doing, as it involved ErrorReport() requesters. The other I just had a hunch about.

Both have now been fixed and committed, which was surprisingly satisfying. Part of my day job is triaging and working on calls from users, but I'll admit I try to avoid it and let underlings read it (and I know at least one of them reads this, and I really appreciate your efforts!). These were fun to work on and fix though, and looking through the queue I see a few others that could be taken care of with ease also. I've asked for access to the bug tracking system so I can close calls. They might make a nice diversion when I'm busy and distracted like I have been this week.

Tonight I'm doing a few more odd jobs. I'm poking at Staf's ABI v1 branch, which I've offered to help out with on the DOS and BPTR/BSTR conversion. I'm messing with a test harness which I hope to use to make a test suite for DOS. I'm poking at GCC 4.2 and trying to get an up to date C++ compiler/runtime so I can fiddle with WebKit, various games, etc. And I'm sure there's other stuff I'll poke at before the evening is out. Drop into #aros on irc.freenode.net and ask me how its going :)

friday, 22 june 2007

posted at 11:35
tags:

I just dug out my copies of C64 demos by System and all the source code I could find for stuff I wrote at the time. Most of it is going to be unintelligible unless you've done C64 assembly yourself, but it's useful for historical purposes, I guess. All I ask is that you ignore the poor readability, lack of understanding of the hardware features, self-modifying code, and combination of angst and bravado, both misguided. When you're a teenager (this code ranges from 1992-1998, so I was 12-18) you say stupid things :)

(Shockwave has more stuff that may or may not include stuff by me. Same warning applies).

thursday, 21 june 2007

posted at 13:46
tags:

cataclysm.cx was suspended last night because I didn't pay the bill. In my defense, the registrar never sent me an email telling me it was due; I've asked them about it. No permanent harm done though.

Due to the way they implemented the suspension, email destined for me should have been queued by remote mail servers, and so I should get it as they retry over the next day or two. If you sent something and you think I may have missed it (ie you got a bounce), try again, or ask me online.

monday, 18 june 2007

posted at 21:06
tags:

Today I checked in my pipe code. It consists of the Pipe() system call, the FSA_PIPE and ACTION_PIPE definitions, the implementation in pipefs.handler, and the changes to the shell and the C library to use it. It's nice to have it finished off, but then I got on the bus on the way home and realised I didn't really know what to work on. I've sort of forgotten what I was doing before I started on this stuff, but I've also seen a lot of bad code while implementing it, and it bothers me to leave it alone.

So, I've decided to reimplement pipefs.handler. I can justify it because it will have to be reworked for packets eventually, and it doesn't actually implement the features and interface of the AmigaOS Queue-Handler, which provided the same facilities. It'll also be an excuse to clean up the handler's horrendous code.

The 3.9 SDK has a pretty good description of what the interface is like. Basically, you open some PIPE: object and write stuff to it. The data you write gets buffered in the handler. When something else opens the object and reads from it, it gets everything that was written into the buffer. If the buffer is empty when the last user closes it, the object is destroyed. Otherwise, it hangs around in memory until someone else opens it and reads from it (or you reboot).

Pipes are named and system-wide. PIPE: alone without a "name" (or "channel" as the SDK likes to call it) is still named, and will still do the same thing as a pipe where the name is specified. The name can have CON:-style options to specify buffer size, which of course we can extend in the future.

I should be able to copy a fair chunk of code from fat.handler. The number of packets that need implementing is minimal: FINDINPUT, FINDOUTPUT, FINDUPDATE, READ, WRITE, END, IS_FILESYSTEM and of course PIPE. packet.handler will also need a translation for ACTION_PIPE. In theory this seems simple - I hope it turns out that way.

sunday, 17 june 2007

posted at 21:34
tags:

A pipe created with Pipe() has two handles. Therefore, the initial use count should be two, not one. Obvious stuff, really.

saturday, 16 june 2007

posted at 15:48
tags:

I'm having a wonderful time delving into lots of bits of AROS that I haven't seen before. What started as simply wanting to make IN: work has led me into the depths of the shell and beyond.

As discussed, I've implemented a new system call:

LONG Pipe(CONST_STRPTR name, BPTR *reader, BPTR *writer);

It returns two handles attached to the same "thing", denoted by name. I expect that this will be a simple volume specifier with no details (eg PIPE:), but could potentially have console-like options attached to it (say PIPE:4096 to set the buffer size). It could conceivably also take a full path for use with named pipes. Whatever; it's mostly there to allow flexibility for the future.
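
Usage ends up looking something like this (a sketch; the success convention is assumed and error handling omitted):

    #include <dos/dos.h>
    #include <proto/dos.h>

    void pipe_demo(void) {
        BPTR reader, writer;
        char buf[16];
        LONG len;

        if (Pipe("PIPE:", &reader, &writer)) {   /* assuming non-zero == success */
            Write(writer, "hello\n", 6);
            Close(writer);

            len = Read(reader, buf, sizeof(buf));   /* gets back "hello\n" */
            Close(reader);
        }
    }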

Underneath, I've defined a new IOFS action and a new packet:

#define FSA_PIPE 45
struct IFS_PIPE {
    STRPTR       io_FileName;
    struct Unit *io_Writer;
};

/*
LONG                      (dp_Res2)
ACTION_PIPE(BPTR lock,    (dp_Arg1)
            STRPTR path,  (dp_Arg2)
            BPTR reader,  (dp_Arg3)
            BPTR writer); (dp_Arg4)
*/
#define ACTION_PIPE 1800

Of course, only the IOFS version is used so far, and I haven't implemented the translation in packet.handler yet. I'll probably wait until it's needed - it's about eight lines.

I've modified pipefs.handler to handle this new action, and it's doing well. You call it with just PIPEFS: as the name, and it creates a single pipe, two handles on it, and returns them. When both handles are closed, the pipe disappears.

Next, I taught the shell about it. Its internal Pipe() function, which Open()'d PIPEFS:__UNNAMED__ and did some duplication and mode-changing magic, is now gone, replaced by a call to the new system one. To test, I wrote a tiny program called minicat that acts like Unix's cat - it opens the named file and puts it onto standard output, or if you don't specify a file, reads from standard in:

#include <proto/dos.h>
#include <dos/dos.h>
#include <stdio.h>

int main(int argc, char **argv) {
    BPTR in, out;
    char buf[256];
    LONG len;

    /* open the named file, or fall back to standard input */
    if (argc > 1) {
        if ((in = Open(argv[1], MODE_OLDFILE)) == NULL) {
            Fault(IoErr(), "minicat", buf, 255);
            fprintf(stderr, "%s\n", buf);
            return 1;
        }
    }
    else
        in = Input();

    out = Output();

    /* copy until EOF (Read() returns 0) or error (-1) */
    while ((len = Read(in, buf, 256)) > 0)
        Write(out, buf, len);

    /* only close the handle we opened; the standard streams aren't ours */
    if (argc > 1)
        Close(in);

    return 0;
}

Running minicat somefile.txt | minicat does exactly what you'd expect. minicat somefile.txt | minicat IN: does the same thing. This confirms it - pipes are working, as are the standard streams. Hurrah!

Something I did notice when watching the pipefs.handler debug output is that when using a shell pipe, the shell actually seems to be closing a side of the pipe that it already closed. I haven't looked into it in depth, but it seems that it closes both halves of the pipe when one of the commands completes, but it spawns the second command with PRF_CLOSE* flags so it tries to close the pipe on shutdown. It can be plainly seen in the pipefs output as the usage count of the pipe drops to -1. Of course at that point the pipe doesn't even exist any more, and memory has been freed. I can only assume that it's the lack of memory protection that has allowed this to go unnoticed for so long. I'll dig down into that a little this afternoon.

And the point of all this hacking, if you remember, was to make it so that Type something.txt | More IN: would work. Well, after all this, it doesn't. From what I can tell, it never could have worked, because of the incorrect way it allocates its internal buffers. It tries to allocate enough memory to hold the entire file in memory, but if the file isn't a "real" file (ie it's a console), then it just allocates a 64KB buffer instead:

    if (IsFileSystem(filename)) {
        Seek(fh, 0, OFFSET_END);
        new_filelen = Seek(fh, 0, OFFSET_BEGINNING);
    }
    else
        new_filelen = 0x10000;

The problem here is that PIPEFS: is a filesystem (it has directories, named files, etc), so IsFileSystem() returns true, but its files aren't seekable. Thus Seek() fails and new_filelen becomes -1, which causes an error further down, and the program aborts.

The right way to do this is to test whether the handle is seekable. If it is, then More should read the file in chunks and Seek() around as the user moves through it. If it's not, then the only option is to read it from start to finish, so More should maintain its own in-memory buffer, growing it as necessary. That is more than a trivial change though, and I'll need to study the More code in depth before deciding whether it's something I want to work on right now.
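
The probe itself is cheap - a null seek fails on handles that can't seek - so More only needs something like this up front (sketch):

    /* Seek() returns -1 for handles that can't seek (pipes, consoles) */
    BOOL seekable = (Seek(fh, 0, OFFSET_CURRENT) != -1);

    if (seekable) {
        /* read in chunks, Seek() around as the user moves */
    } else {
        /* read start to finish, growing an in-memory buffer as needed */
    }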

friday, 15 june 2007

posted at 20:54
tags:

Last week I got a new workstation at the office. It's a lovely piece of kit - 2GHz Core 2 Duo, 4GB RAM, dual 19" LCDs, etc, etc. I'm very happy with it - it's fast and shiny, and it's always nice to start with a fresh OS install. Around the same time, the box that does the AROS nightly builds died. Staf Verhaegen, our buildmaster, had expressed an interest in getting out of the nightly build game, and with a grunty box that sits idle for sixteen hours of the day and is hanging on the back of one of the fastest networks in Australia, it seemed that I was in the perfect position to help out.

Last week I got the nightly build process up and running, and I've been doing a full build every night. It's been working very nicely, so from tonight it will start uploading builds to aros.org. Builds start at 22:00 my time (currently +10, so 12:00 UTC) and usually take 80 minutes to run. So you should all be able to get your fix soon :)

wednesday, 13 june 2007

posted at 08:51
tags:

Named pipes are like normal files in that they can be created and deleted. The difference is that when you read from one, nothing happens until something else writes to it. Then the reader gets a copy of whatever the writer wrote. Usually there can be multiple readers and writers.

We need to create an "unnamed" pipe. This is different from a named pipe in the following ways:

  • When the last reader/writer closes it, it disappears.
  • Because it has no name, there's no way to open it again once it's created.

AROS implements it by opening PIPEFS:__UNNAMED__ for read, then using the "duplicate with new mode" misfeature of Open() on this handle to get the write half. Internally, pipefs.handler recognises __UNNAMED__ and sets a flag to tell it to automatically destroy the pipe when the last opener drops off.

This is currently not working because I changed Open("", mode) to be implemented as (sans error checking):

    lock = LockFromFH(pr->pr_CurrentDir);
    newfile = OpenFromLock(lock);

As you can see, it entirely ignores the new mode. The way it used to work before I broke it was to dig the device and unit out of the "lock" (actually a filehandle on AROS), then call FSA_OPEN_FILE with the new mode. I could simply revert to this behaviour, but longer term this won't work because locks and filehandles won't be equivalent any more, which means the handle can't be assigned to pr_CurrentDir, and so Open("", mode) won't know what it's supposed to be duplicating.

OpenFromLock() (or its 1.3 Open("") counterpart mentioned previously) also can't be used because not all filesystems use locks - there's no guarantee that a lock can be obtained from a filehandle. Of course we could just make sure that the pipe handler does use locks, but that places a fairly big restriction on its internals, making it harder to be replaced.

I've done some research and it seems that there's no standard interface for unnamed pipes (where they've even been available). The usual way seems to be to generate a unique filename (based on a timestamp) and use that. It works well enough, but it does require that the pipe be deleted afterwards. There was also something called ConMan that had an ACTION_DOUBLE packet that would return a pair of handles (like the POSIX pipe() system call). I really like that approach, but would prefer to not have to extend the API.

On the other hand I can't see a way to do it without extending the API. For the pipe to be truly unnamed, you need to be able to return two handles from the same call (like ACTION_DOUBLE). It's not a terrible approach.

Do we really need unnamed pipes? The only place it's currently used is in the shell (to implement '|'), so could the shell just have it built in? Of course it could, but the implementation would be almost as complex as a handler anyway, and it makes sense to have the function available to other things, like POSIX emulation (pipe(), popen(), etc). Obviously they'd be a good thing to have.

I can think of other ways to do it where the handler could infer the requirement for a private, auto-closing pipe (like two open calls on the same name immediately after each other, one for read and one for write, followed by some other call), but that kind of thing is too easy for a programmer to get wrong, and doesn't read well. I think a direct call is what we want.

I'm going to look at two approaches: an AROS-specific Pipe() call in DOS (LONG Pipe(BPTR *reader, BPTR *writer)) and, if that can't work (eg not enough space in the table), a new "mode" like ACTION_DOUBLE. I'll start experimenting tomorrow, most likely.

tuesday, 12 june 2007

posted at 19:19
tags:

I like to write about what I'm doing. I'd actually write a lot more than I do except that I like to take the time to make what I write at least semi-coherent. When things get busy that means I often don't have as much time to write, which means it takes longer to write about everything I want to, which further demotivates me, and on and on.

Today I discovered Twitter, which is a "microblogging" service. Basically it lets you quickly write tiny snippets of whatever and have them appear in a steady stream. I don't know if it's going to work for me or not, but I thought it might be a good way to quickly get my ideas down when I don't have time to write.

The sidebar on this page now has a live feed, courtesy of Velvet Sarah. I'm liking TTYtter as a client (very MUDish), though I have a few ideas for an XFCE panel client.

Anyway, we'll see how it goes. Let me know what you think.

tuesday, 12 june 2007

posted at 15:38
tags:

I started writing an entry about Open() and its edge cases this morning, but it made very little sense, which I suppose is fitting. I got to the end of the day just now, and read back over it, and decided that it was dumb. My brain is mushy anyway after the madness of today, so I'm not going to go into it much.

In short, Open() works like you'd expect (turn a name into a filehandle) except when you call it with an empty name, which makes it open a file using a lock in pr_CurrentDir, unless you're on AROS where handles and locks are the same thing, and then it duplicates a handle. Since the mode flags are taken into account, it's not a pure duplication, yet it's called on the original object, so the semantics are slightly different to just opening the file again with its original name. This subtlety is why pipes are currently broken.

Further, FSA_OPEN (the IOFS action underneath Open()) with a filename of "/" actually means "open the parent directory", ie ACTION_PARENT under the packet system. Our console handler didn't know this, causing More IN: to crash the system.

I'll shortly decide on the One True Way, and fix everything. Technically it's an API break, but this is a non-trivial corner case and it needs to be fixed - it's not something that will break most (if any) existing programs.

sunday, 10 june 2007

posted at 20:46
tags:

So Pavel went on holiday this weekend but didn't want to hold me up, so he graciously offered for us to switch places - I check in my GetDeviceProc() and ErrorReport() patches, and he'll take them on holiday with him and update his code to match my changes. This was done, so the current nightly build should have the new code in it. Type Dir FOO: into the shell to see!

The next DOS work for me is to sort out the whole mess with STDIN: etc. I think I've figured out a way to deal with it. GetDeviceProc() will be updated to return a valid struct DevProc with a new flag set in dvp_Flags, DVPF_STDIO. It should be possible to work out a valid value for dvp_DevProc from the handle - only a maybe under IOFS (due to the device/unit problem), but definitely with packets. Most of the time things won't care, but should a calling function need to know the difference then it can. I've already started on this.
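
The callers that care would then do something like this (a sketch of the proposed interface - DVPF_STDIO doesn't exist yet):

    struct DevProc *dvp = GetDeviceProc("OUT:", NULL);

    if (dvp != NULL && (dvp->dvp_Flags & DVPF_STDIO)) {
        /* a standard stream: don't treat it like an ordinary volume */
    }
    FreeDeviceProc(dvp);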

I'm also due for a break from DOS, so I'm thinking I might see if I can track down the huge memory leak in Wanderer at the moment. I've got to be careful though - I have an idea for a memory tracker and debugger, but it could take quite a while to design and implement. I can't afford to get too distracted from DOS. I just need to stick to the goal - fix the leak, don't spend more than a few days on it.

thursday, 7 june 2007

posted at 22:32
tags:

As mentioned previously, AROS DOS has some special magical device names that don't correspond to any underlying device - IN:, OUT:, etc. Because they're AROS-specific, I get to choose how they're implemented, but I also need to make sure that the right thing happens.

Before my recent changes, Open() and Lock() knew about them explicitly, and other calls like IsFileSystem() did their own simplified DOSList traversal and so knew how to handle not finding them there.

My recent changes have made these names only known to Open(), much like CONSOLE: and NIL:. Everything else uses GetDeviceProc(), which by design only knows about what is in the DOSList. And with my ErrorReport() changes, we now get requesters asking us to insert these "volumes".

This happens because of the way More accesses the file. First it calls Open(), which is fine. Then it calls IsFileSystem() to decide if it should do its own buffering or not (which is actually not the right way to test this, but that's not relevant here). Later, it calls Lock() on the name, then NameFromLock() to generate the window title. The calls to IsFileSystem() and Lock() both yield requesters because both those functions use GetDeviceProc() underneath, and those "devices" aren't in the DOSList, so it asks for them to be inserted.
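
In code, the sequence More runs through is roughly this (a sketch of the calls just described, not More's actual source):

    char title[256];

    BPTR fh = Open("IN:", MODE_OLDFILE);       /* fine - Open() knows IN: */
    BOOL fs = IsFileSystem("IN:");             /* requester - not in the DOSList */
    BPTR lock = Lock("IN:", SHARED_LOCK);      /* requester again */
    NameFromLock(lock, title, sizeof(title));  /* for the window title */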

My first thought was that these cases should be like CONSOLE:, and not work. But then I thought a little more and realised this was no good. CONSOLE: always points to a console (unless you've really screwed something up, but then your software is broken), so you can assume it's always interactive, never a filesystem, not lockable and nameless - simple. Standard I/O can be redirected however. After you Open("OUT:", MODE_OLDFILE) you don't know if the handle you got back is directed at the console, a file, a pipe, or whatever else. So it is entirely reasonable to do IsFileSystem("OUT:"), etc. A solution is needed.

To fix this requires either teaching GetDeviceProc() about these names or adding tests to every function that takes a name to deal with them. The former sucks because we can't really build a struct DevProc for these names without being able to return its DOSList entry in dvp_DevNode, which might not exist if the filehandle is pointed at CONSOLE: or NIL:. The latter sucks because we need special-case code everywhere - more clutter, harder to read, harder to maintain, etc.

There's a third option: making a new handler and having it handle these names. I do like this idea, but I'm not sure it's practical. I'd need to add entries to the DOSList for each name (so six total), but I'm not sure of the best way to approach that. Adding DLT_VOLUME entries is out because then Wanderer will display them. DLT_DEVICE entries could be fine but then we'd be violating the "rule" of one DOS device to one handler task. Unless we made a separate task for each, but then there are six barely-used tasks kicking around in the task list.

There are also complications in the fact that these names need to provide access to the in/out/error streams of the calling process, so they can't really run in a separate task as we need to extract the handles from the process context. That's not so bad - a PA_FASTCALL port could take care of that.

The whole thing really is fraught with danger, but there doesn't seem to be an easy way out. I haven't thought about it much though, so I'll ponder it some more and see if there's a clean way to add the special-case checking to the requisite functions. And I'll probably need to add "fix More" to my list too, because I need more to do :P

thursday, 7 june 2007

posted at 20:34
tags:

Today I rewrote ErrorReport(). The previous implementation didn't handle most of the errors it was supposed to (which aren't actually many), and was in need of the same kind of general cleanup as everything else has needed.

Following that, I started adding error loops into a few DOS functions so that they'd bring up requesters at the proper time. Obligatory screenshot:

So far GetDeviceProc() will ask you to insert volumes and Read(), Write() and Seek() will report nicely if the operation couldn't succeed for some reason (low-level error, disk removed, etc). I've done a few other functions as well and will gradually implement this throughout DOS.
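
The loops follow the classic AmigaOS pattern - retry the operation until it succeeds or ErrorReport() says the user gave up. A sketch of the shape (the real code sits inside the DOS functions themselves):

    LONG len;

    do {
        len = Read(fh, buf, sizeof(buf));
        /* ErrorReport() returns FALSE if the user selected Retry */
    } while (len == -1
             && !ErrorReport(IoErr(), REPORT_STREAM, (IPTR)fh, NULL));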

monday, 4 june 2007

posted at 10:25
tags:

Just found another problem with IOFS. There's no really good way to determine if two files are on the same filesystem, which you need to know to safely rename files and create hard links.

Under packets, every mounted filesystem has its own handler task, and so has its own message port. You just get the port pointers for the source and target files and compare.
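
That check really is just a couple of lines (sketch):

    struct FileLock *a = BADDR(lock1), *b = BADDR(lock2);
    BOOL same_fs = (a->fl_Task == b->fl_Task);  /* same handler port? */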

With IOFS, two filesystems of the same type will have the same device pointer, even if they're different filesystems. Internally the device will usually have a separate task for each, but there's no way for DOS to get that information. If you have handles for the files, then their unit pointers will be different, of course, but these are context for the file, not the filesystem, and so they'll be different even on the same filesystem.

The only way I can think to do it is to use NameFromLock() on each file (and actually lock the passed names first if they're all that's available), then compare the volume names in the returned strings. NameFromLock() causes many calls into the filesystem, which would make these operations hopelessly inefficient. Admittedly renaming and/or linking is not something you do often, but that doesn't change the fact that this interface is broken.

friday, 1 june 2007

posted at 17:16
tags:

Quick notes, as I'm nearly home. Open() handles a few "magical" filenames: NIL:, PROGDIR:, CONSOLE: and *. The last two are identical, and should return a handle on the console that the program is running from, regardless of whether or not standard input or output have been redirected. The Guru Book calls it the error channel, but of course it should be valid for input too.

Under AmigaOS, this was (probably) implemented by calling GetConsoleTask() (which grabs a struct MsgPort * from pr_ConsoleTask in the process context) and sending an appropriate ACTION_FIND* packet to it to get a handle. Every console has a separate task, so a single pointer is all that's required.

Under AROS, each console has a separate task, but due to the fact that IOFS handlers are actually devices and so have a single global entry point, you need two pointers - one for the device pointer to console.handler, and the other for the unit pointer that represents the individual console task. Obviously two pointers can't be stored in pr_ConsoleTask, so AROS introduces a new handle pr_CES that complements the input and output handles pr_CIS and pr_COS, and two new functions Error() and SelectError() to complement Input()/SelectInput() and Output()/SelectOutput().

This arrangement works well enough but still sucks - any time you have to add a new field to a struct it sucks. Of course, this is no different to the myriad other places that this has been done in DOS to support IOFS over packets. There are a few broken bits though: opening CONSOLE: or * for input will always use pr_CIS (ie standard input), regardless of whether or not it's been redirected. Opening for output will always use pr_COS if you open CONSOLE: (same issue), but curiously will use pr_CES for *, falling back to pr_COS if it's undefined.

There are also some AROS-specific magical names: STDIN:, STDOUT: and STDERR:, and their short forms IN:, OUT: and ERR:. As far as I can see they're only used by clib to provide Unix-style /dev/stdxxx compatibility. It's redundant though - we have Input() and Output() for exactly this purpose.

Further, Lock() also knows about CONSOLE: and * (but not PROGDIR: or NIL:) and about STD*:, when it shouldn't - the Guru Book says these names are only magical for Open(), no one else (except GetDeviceProc(), which knows about PROGDIR:).

Oh, and AROS has a real nil.handler to support NIL: (a bitbucket handle), rather than just swallowing data internally.

That's all. My intention is to fix all this, though I don't know what order it'll happen in. I'm more just noting it in passing while I work on removing DoName().

thursday, 31 may 2007

posted at 22:26
tags:

In our last episode I was implementing FileLocks into dos.library. Well it's done now. It took a couple of rewrites (if you can call mass search-and-replace operations a "rewrite") as Pavel kept pointing out problems with my implementation (and rightly so), but it's done. Trouble is, I can't check it in. It turns out the aliasing of lock functions (like UnLock()) to their filehandle counterparts (like Close()) was actually getting hard-coded into binaries by the compiler. Adding the lock functions back to support locks is fine, but all existing programs are hardcoded to call Close() when they should call UnLock(). That's fine if your locks and handles are the same, but as soon as they change, Close() suddenly finds itself being handed a lock. With no way to tell the difference, it pokes in places it shouldn't, causing a spectacular meltdown. And this only affects, oh, every program ever built for AROS that accesses files. All of them, in other words.

Staf Verhaegen is working on a pile of changes that will break the ABI and API, with a view to marking the ABI "stable" when he's done. I'll include these changes as part of that so that it's only one round of user pain, but it does mean that I have to hold off on releasing it, which also means that I can't really do much work on integrating packets. I can, of course, but I really don't like holding on to uncommitted changes for too long - they tend to be painful to merge.

What I can do while I'm waiting though is start cleaning and restructuring our DOS to make supporting both packets and IOFS a breeze. Things like adding appropriate abstractions and such. I've started on this, with the first object of my affections being GetDeviceProc().

This function is the one that I consider to be the heart of DOS. It's quite simple: you give it a full file path, and it returns a structure that contains a handle on the filesystem device (or the handler's message port, for packets) that has the file on it, and a base lock that the file path is relative to. The real magic is that it automatically handles assigns, resolving late- and non-binding assigns on the fly. As if that wasn't enough, it also has the somewhat minor task of getting new filesystems online on demand.
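
For reference, the way callers are supposed to drive it is the standard multi-assign loop - a sketch (TryOperation() is a hypothetical stand-in for whatever the caller actually wants to do):

    struct DevProc *dvp = NULL;
    LONG err = ERROR_OBJECT_NOT_FOUND;

    /* try each member of a (possibly multi-)assign in turn */
    while ((dvp = GetDeviceProc("WORK:foo/bar", dvp)) != NULL) {
        err = TryOperation(dvp->dvp_Port, dvp->dvp_Lock);  /* hypothetical */
        if (err != ERROR_OBJECT_NOT_FOUND)
            break;
    }
    FreeDeviceProc(dvp);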

That's a succinct description of how it's supposed to work. Reality rarely matches though. In AROS, it does assign handling all right, but doesn't load filesystem handlers. It also crashes if you try to ask for a path without a volume specifier (ie something relative to the current dir) or use PROGDIR: (a magic name for the directory the program was started from). To cap it all off, the code is quite difficult to read, which makes it hard to fix.

Furthermore, there's an internal function called DoName() which pretty much every handler call goes through that does basically the same job, but is much smarter. This duplication is completely redundant, particularly when the more advanced (and correct) functionality isn't accessible to the user. So, I set for myself an initial goal of fixing GetDeviceProc(), then updating all of DOS to use it and getting rid of DoName() completely.

I've just now finished the implementation, and as far as I can tell it's working very well. It seems to produce the same results as the old version, but also handles the relative dir stuff without crashing. It expands and resolves assigns as expected. It also has placeholders for calling out to a function that can get the filesystem online if it's not already. This code is coming - Pavel Fedin has been working hard on some DOS changes of his own to fix up our mount process. These should be appearing in SVN soon, and once they're available I'll merge my code and we'll be well on our way :)

Lots to do while I'm waiting though. The next thing is to start moving functions away from DoName(). I'll start small - Open() should be nice and simple.

Oh.

saturday, 26 may 2007

posted at 09:16
tags:

My laptop has an Atheros wireless chipset, so that meant using the MadWifi driver under Linux. It never quite worked properly, often dropping out and sometimes having trouble associating with my access point, but it was always good enough that I couldn't be bothered doing anything about it.

In the last week I've updated my kernel to 2.6.21.1 and Debian installed the latest MadWifi too, and now things have got really bad. My connection to the access point will only last about 30 minutes before dropping, and it won't reassociate without a reboot. I tried backing it out but I obviously wasn't paying enough attention to what changed, because it still wouldn't work. Obviously all this sucks, so I started looking around for a fix.

I couldn't find anything about the exact problem, but I did find NDISwrapper, which is a shim that allows Windows network drivers to be used inside Linux. I gave this a try, not really expecting much, and to my amazement it works. I was connected for a few hours last night, and when I drop the connection and restart it, it just works. Amazing.

I do like being online :P

wednesday, 23 may 2007

posted at 15:48
tags:

As noted, I've started hacking on DOS. The first thing on my list is to make it use struct FileLock correctly.

AmigaOS has two types for referring to some kind of on-disk object - struct FileLock, which can reference any type of object (file or directory) and struct FileHandle, which is only used for files, but contains extra information such as a buffer and current position, allowing I/O. Internally it contains a lock to the underlying file as well. For the most part, a filesystem handler only operates on locks, leaving handles to dos.library. (There's a couple of minor exceptions where handles are manipulated by the handler, but it's not really of any consequence, so I won't go into any more detail.)

When AROS was given its own filesystem API, it did away with locks as well, using handles for everything. The main functions of the lock - providing a pointer to the handler, a pointer to the underlying file context, and the current access mode - were all added to struct FileHandle, reusing undefined DOS-private fields (fh_Func1, fh_Func2 and fh_Func3). Since the pointers accepted and returned by DOS functions are opaque BPTRs, it's not actually an issue for most programs, and so life has continued happily for the past ten-odd years.

Where this system falls down is with the DOS packet functions SendPkt() and WaitPkt() (and indirectly, AbortPkt(), DoPkt() and ReplyPkt()). The problem is simple: under AmigaOS these functions don't deal with locks or handles, but with the message port the filesystem handler uses to receive packets on. That port is usually obtained by using BADDR() on a BPTR returned by Lock() to get a struct FileLock, and then getting the port from its fl_Task.
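
In code, the pattern looks like this (sketch):

    BPTR lock = Lock("SYS:", SHARED_LOCK);
    struct FileLock *fl = BADDR(lock);
    struct MsgPort *port = fl->fl_Task;  /* the handler's packet port */

    /* port can now be handed to SendPkt()/DoPkt() */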

This used to be completely impossible, as until my recent work with packet.handler struct FileLock didn't even exist on AROS, so your code wouldn't even compile. Now it does, but if you try to fish fl_Task out of a "lock" you end up with some random stuff that patently isn't a port, and so sending to it just won't work. Of course, AROS filesystems don't take packets and don't use ports anyway, which is why SendPkt() and ReplyPkt() try to do packet conversion (which doesn't really work), but some programs also like to send their own packets. If anything tries to send to the "port" obtained from the filehandle it's likely the system will crash (that position in the struct is held by fh_Buf, which is the I/O buffer).

One of the other issues here is that even if locks are used, AROS filesystems don't use ports, so even if we did use struct FileLock properly, fl_Task wouldn't be anything useful unless it was populated with a port owned by packet.handler that can do packet->IOFS conversion.

The goal to remove IOFS has come from a few things. It's not adding any real value to have it, we have only a handful of minor devices that use it directly (CDVDFS and SFS are both packet-based handlers with IOFS wrappers that could be easily removed), and source compatibility isn't there. Replacing it however is a big job, so we're taking an incremental approach. The initial goal is to support both IOFS and packets natively inside DOS. The first step is to bring struct FileLock back to life, which I started on yesterday and is nearly done.

To do that these structures have been updated such that struct FileHandle no longer holds stuff to reference the filesystem, but instead contains a pointer to a struct FileLock which does have this information. The lock is held in fh_Arg1, as was always the way under AmigaOS.

With normal packet handlers, the lock contains two fields to reference the file: fl_Task which is a message port for the handler, and fl_Key which is some random data set by the handler that it can use to find the file on disk. IOFS handlers had a similar pair of fields held in the filehandle - fh_Device which is a pointer to the Exec device of the handler, and fh_Unit which is the opaque data. Pavel Fedin, in a stroke of genius that now seems completely obvious, suggested simply storing the IOFS device and unit into fl_Task and fl_Key as a fast and cheap way of bringing FileLock back.

This is fine if you only have one type of handler, but we have two, and so need to be able to tell the difference. Pavel came to the rescue here too - give struct FileLock an extra field, fl_Device. Put the device pointer there, and the unit pointer in fl_Key, and use fl_Task as a flag to determine the type - when it's NULL, it's an IOFS handler and fl_Device is valid; when it's non-NULL, it's a packet handler and fl_Device has no meaning (and in fact shouldn't even be accessed, as locks are then entirely allocated by the handler, which may have something different here (like struct ExtFileLock in fat.handler) or nothing at all if the handler is using the original AmigaOS sizings).
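
So code inside DOS that has a lock in hand ends up doing something like this (a sketch of the scheme as described):

    struct FileLock *fl = BADDR(lock);

    if (fl->fl_Task == NULL) {
        /* IOFS: fl_Device is the Exec device, fl_Key the unit */
        struct Device *dev  = fl->fl_Device;
        struct Unit   *unit = (struct Unit *)fl->fl_Key;
        /* ... build and send an IOFileSys request ... */
    } else {
        /* packets: fl_Task is the handler's port, fl_Key is private */
        struct MsgPort *port = fl->fl_Task;
        /* ... send a dos packet ... */
    }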

So far I've set up these definitions and reworked the DOS internals (and a few other bits of code around the place that were using IOFS directly) to match. It's mostly a case of renaming FileHandle to FileLock, bouncing through fh_Arg1 to get the device and unit pointers, and of course allocating and deallocating lock structures in the right place. There's a small build issue to figure out (waiting for a reply from aros-dev) but once that's sorted AROS should at least start, and then I can begin tracking down the several million edge cases that will probably arise from this.

If you want to see the code, ask me - not checking it in yet because I have no desire to break the tree.

monday, 21 may 2007

posted at 10:10
tags:

I implemented SET_FILE_SIZE late last week. I don't want to talk about it, read the code if you need the details. It's a nightmare and I had to rewrite it three times before it was right. It shouldn't have been difficult, as it's just an exercise in linked-list management, but as usual it took me a little while to realise this.

Work on FAT is really winding down now, so I'm starting to move into hacking on DOS. The eventual goal for me is to remove IOFS and use packets natively, and to fix up all the boot sequence and volume management stuff. I did the first bit this morning.

The only real stumbling block for using packets over IOFS is the additional overhead of using messages over Exec IO calls. An IO call (via DoIO()) simply calls a device's BeginIO vector - no mess, no fuss. On the other hand, sending a message (via PutMsg()) disables task switches and interrupts, adds the message to the port's message list, then triggers the interrupt or signal. Later, when the thing listening on the port (ie the filesystem) receives the message, it calls GetMsg() to get the message, which does another round of disabling task switches and interrupts. This overhead was deemed unacceptable by advocates of IOFS.

It is alleviated slightly by an undocumented port type. A disassembly of the Amiga 1.2 exec.library reveals that when the port type == 3, no signalling or interrupt is done but instead a handler function is called directly. I've implemented this as PA_CALL. It's good for compatibility, but still not quite what we want to replace IOFS, as it still disables task switches while it adds the message to the list.

I had a brief discussion with Staf Verhaegen a couple of weeks ago, and we came up with a solution - a new port type that doesn't disable task switches but simply calls a handler function (like PA_CALL) with the message as an argument to the function. This makes it equivalent to DoIO(). You really need to know what you're doing to use it (in particular you don't get your messages from WaitPort() and GetMsg() any longer), but it allows filesystems to be called without any additional overhead (assuming they've been written to support this) and doesn't require any changes in DOS or applications - they just call PutMsg() (or SendPkt()) like normal. I've implemented this this morning as PA_FASTCALL.
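
Setting one up would go something like this (a sketch - I'm assuming the handler function pointer is stored in mp_SigTask, the same slot PA_CALL borrows; check ports.h for the real arrangement):

    struct MsgPort *port = CreateMsgPort();

    port->mp_Flags = PA_FASTCALL;
    port->mp_SigTask = (struct Task *)handler_func;  /* your own function,
                                                        called as handler_func(port, msg) */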

I wrote a test program, tests/timeport, that sends one million messages to each of the different port types and times how long they take (including a standard PA_SIGNAL reply). The timings are for comparison purposes only, but it's still revealing:

8.System:> tests/timeport
testing with 1000000 messages
PA_SIGNAL: 15.10s
PA_SOFTINT: 7.220s
PA_CALL: 3.940s
PA_FASTCALL: 2.760s

Now to commit. Hopefully there won't be too much fallout :P

thursday, 17 may 2007

posted at 10:25
tags:

After implementing the global lock stuff last week, I spent a couple of hours on the weekend making renames work. It's a naive implementation, which as we know contributes to fragmentation, but it works and was trivial to implement, which is all I care about right now.

Since then I've been building support for notifications. DOS has a pair of functions, StartNotify() and EndNotify() that allow an application to receive a message when a file changes. The application passes a structure to StartNotify() that contains, among other things, the name of the file and a port (or signal) to notify when something happens to it. The most interesting thing about it is that the file is specified by name, not by lock or anything like that. Additionally, the file doesn't have to exist at the time StartNotify() is called.
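
For the record, the application side of the API looks like this (a minimal sketch using the message variant; "port" is a message port you've already created):

    struct NotifyRequest *nr = AllocVec(sizeof(struct NotifyRequest),
                                        MEMF_PUBLIC | MEMF_CLEAR);

    nr->nr_Name = "work/notes.txt";      /* relative name, as discussed */
    nr->nr_Flags = NRF_SEND_MESSAGE;
    nr->nr_stuff.nr_Msg.nr_Port = port;  /* NotifyMessages arrive here */

    if (StartNotify(nr)) {
        /* ... Wait() on the port, handle the NotifyMessage ... */
        EndNotify(nr);
    }
    FreeVec(nr);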

struct NotifyRequest, which gets passed to StartNotify(), has two filename fields in it. The idea is that the caller sets up nr_Name, which is the name of a file relative to the current directory, and DOS then builds nr_FullName to contain the full path and volume of the wanted file (expanding any assigns) for the handler to use. nr_FullName is off-limits to the application, and nr_Name is off-limits to the handler. Looking through our code, I found that DOS wasn't setting up nr_FullName at all. We only have two filesystems that support notification, SFS and ram_handler. SFS, being ported from AmigaOS, did the right thing and tried to use nr_FullName, and so notifications didn't work. ram_handler incorrectly read nr_Name and built nr_FullName itself, such that its notifications worked.

The first thing I did was reimplement StartNotify() and EndNotify() to do the right thing. This involved doing calls to GetDeviceProc() and NameFromLock() which apparently is a standard procedure in AmigaOS for building a full path. It isn't used anywhere in AROS however, with work instead being performed by the IOFS code (DoName()). That will change when packets finally replace IOFS inside DOS, so it was good for me to learn.

Once that was done, ram_handler got changed to do the right thing and just use nr_FullName as it should. That worked, and SFS notifies magically came to life too. The stage was set for notifications in FAT.

I set up a new list in the superblock structure to hold notification requests. Each list entry holds a pointer to the struct NotifyRequest that was passed in, and a pointer to the global lock for the file (or NULL if the file isn't currently locked). When a global lock is created, we traverse this list looking for entries with no global lock. If nr_FullName matches the name of the file being locked, a link is created.

This matching process is interesting. Inside fat.handler files are referenced by two numbers - the cluster that holds the directory that references them, and their entry within that cluster. Converting a path to a cluster/entry pair is pretty straightforward - you break up the path, start at the root dir and look for each piece recursively. (The GetDirEntryByPath() function does this.) Going from a pair to a path is much more difficult - you start in the pair's directory, get the parent dir, search that dir for the subdir to get its name, then go up and do it again until you can assemble a full path from all the name pieces.

Because of this complexity, it actually works out to be faster, when we want to see if a name matches a pair, to convert the name to a pair (using GetDirEntryByPath()) and then simply compare it with the wanted pair. It's a shame there's no good way to make it efficient, but fortunately it doesn't have to happen too often.

A notification can be sent by cluster/entry pair or by global lock. The global lock case is easy; we just traverse the notify list and if its lock pointer matches ours, we send the notification. For the pair case, we traverse the list and compare against the cluster/entry in the lock, or if there is no lock, expand the name and compare with that. Both types are needed - when opening, closing or writing a file, there is a lock available (because the application is working with the file). When renaming a file, for example, no locking occurs, and all we have at that time are cluster/entry pairs.

That pretty much sums it up. The actual implementation was quite simple, again suggesting that the internal APIs are spot on :)

Today I got my laptop back, so got to code on the bus again, which is very nice. I implemented code to actually check the state of the READ_ONLY flag before allowing anything that might write. I still need to have it check the disk write-protect stuff and make C:Lock work, but now we're getting down into the minutiae of this thing. Nearly done :)

saturday, 12 may 2007

posted at 23:14
tags:

I got a new mobile phone this week. My old phone (an O2 XDA II mini, also known as a HTC Magician) has been steadily degrading over the last few months, and got to the point where both the internal speaker and the headset would not produce any sound, of course making it impossible to receive calls. The PDA aspects of the phone I loved, but I still need it to be a phone.

I've now sworn off anything from HTC, as I expect something that costs over $1000 to last longer than 18 months. Looking around a little, I settled on a Nokia N80, which arrived on Thursday. It's a sweet little piece of kit, and it's really nice to carry a phone around again. I got a 1GB miniSD card as well, as I'd gotten used to having similar capacity on the XDA for my music, which I really need for the bus trips.

I'm now in possession of three ARM devices: the N80 (ARM9E/ARMv5TE), the XDA (XScale/ARMv5TE), and my Nintendo DS (dual-processor, ARM7TDMI/ARMv4T and ARM9E/ARMv5TE). All three have the ability to have additional software installed, the XDA works well (just no sound) and isn't being used, and the DS has some awesome homebrew options (Daniel is showing off some fun stuff in the office). I'm running out of excuses to not port AROS to ARM.

The other cool thing about this phone is that it has built-in 802.11 wireless and a sexy browser based on WebKit. It's been enough to get me interested again in porting WebKit to AROS, which obviously is the major piece required to get a viable browser there.

Oh the time required. Horrors!

wednesday, 9 may 2007

posted at 11:26
tags:

Last night I finished refactoring the lock code and checked it in. I'm kinda surprised that it's still working. Here's the story.

The original code pretty much didn't track the locks it handed out. It put them in the list of locks held in the volume's DosList entry, and removed them when the locks were freed, but it never looked at them. It shouldn't have been doing that anyway - that lock list is only for when the volume is taken offline (eg the disk was ejected) while locks are still open. In that case, any outstanding locks are added to the DosList. Later, if the volume comes online again, the handler detaches the locks and takes control of them. This is the mechanism by which the Amiga in days of old could request that you "please insert volume Work: in any drive".

This list was being used incorrectly, so it had to change - I have a feeling it's responsible for a bug on native where you insert a new disk and both the old and the new volume appear. A real list of all locks is needed by the handler, for a number of things:

  • If an exclusive lock on a file is requested, the handler needs to know if the file is already locked.
  • Certain file attributes (like its length or location on disk) are the same no matter how many locks are open on the file. If one of those attributes changes, all the locks need to be updated. This doesn't seem like it should be a problem, as there should only be one (exclusive) lock on a file for these attributes to change, however traditionally "read-only" locks (as created by FINDINPUT) can actually be written to, and renaming a file (which due to long filenames may require its directory entries to be moved) should be able to happen even when a file is open.
  • Obviously, locks need to be available so they can be attached to the DosList as necessary.

I did consider just having a straight list of locks, but this meant a search through all the locks every time some shared attribute needed to change. So instead locks now have two parts: a shared part which contains the file's name and protection bits, the location of its first data cluster, and the location of its directory entry, and a per-instance part which has the current seek position and the IO handle. Put another way, the shared part has stuff about the file itself, while the per-instance part has stuff about access to the file.

The shared parts (which I call "global" locks) are held in a linked list attached to the superblock structure. Each global lock has a list of per-instance locks (just called filelocks) attached to it. Each one of those has a pointer to its global lock. The system just passes filelocks around as normal (and out to DOS and back), but goes to the global lock when it needs some file data.
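
In structure terms it looks roughly like this (the field names here are illustrative, not necessarily what's in fat.handler):

    /* shared part: one per file, hung off the superblock */
    struct GlobalLock {
        struct MinNode  node;           /* superblock's global lock list */
        ULONG           dir_cluster;    /* where the dir entry lives...  */
        ULONG           dir_entry;      /* ...and which entry it is      */
        ULONG           first_cluster;  /* first data cluster            */
        ULONG           attr;           /* protection bits               */
        UBYTE           name[108];
        struct MinList  instances;      /* per-instance locks, below     */
    };

    /* per-instance part: one per opener */
    struct InstanceLock {
        struct MinNode     node;  /* entry in the global lock's list */
        struct GlobalLock *gl;    /* back pointer to the shared part */
        ULONG              pos;   /* current seek position           */
        /* plus the IO handle */
    };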

Now that this is in place, all the above things can be implemented. Exclusive locks are already done - when an attempt is made to obtain a lock, the global lock list is checked. If the caller requested exclusive and a global lock was found, the operation fails. Renaming now should be trivial - if the new name won't fit into the existing directory entries, then the existing ones are blanked, new ones created, and the entry attributes in the global lock are updated and seen by all filelocks.

The DosList stuff is on hold. I'll get there, it's just a few steps down my list. The next thing I want to do after renaming is done is to implement notifications. File change notifications are done by filename, not lock, and there can be more than one per file, so I need a place to store them even if the file isn't open. This is now trivial - the notifications get stored in the global lock (which will be created if a notification is requested and the file isn't open).

So now I've written this, and it still makes sense. Weird, it felt so hard at the time.

saturday, 5 may 2007

posted at 14:23
tags:

Since fat.handler reached a pretty significant milestone I felt like I needed a break before getting into the excitement that is refactoring all the lock code (sigh), so I've just finished picking off an item on my todo list - faster bitmap scaling.

Regular readers may remember that last month I found my old port of JJFFE. I released the code and Kalamatee added code that allowed the tiny window to be resized. This worked nicely on native, but on hosted things went from a pleasant smooth-scrolling full-speed affair to a glacial one frame every few seconds - completely unusable.

At the time I did some digging and found what I believed to be the cause. The bitmap scaling code in graphics.hidd did its work by copying into the underlying hardware (or X11) driver a pixel at a time. For most of the hardware drivers, this merely poked values into the hardware framebuffer, and so worked quite quickly. The BitMap::DrawPixel method in x11gfx.hidd is incredibly slow for multiple uses though, having to lock, then plot, then flush the image, then unlock. This was happening for every one of the thousands of pixels in the image, and FFE was trying to do it every frame. Naturally, this is suboptimal.

So, I spent some time yesterday and today nutting out a fix. There may be a problem with the speed of DrawPixel on X11, but it made more sense to me to try and reduce the number of calls that the basic BitMap::BitMapScale method was making into the underlying hardware driver. The solution I decided on was to scale the image in memory and then push the new image to the hardware in one hit.

It took me ages as I know virtually nothing about the AROS graphics system, but I managed to get something working. It uses the same naive scaling algorithm as before, but now it calls BitMap::GetImage on the source bitmap to get a raw byte array, creates a second raw byte array, copies and scales the image into it, then calls BitMap::PutImage on the destination to write it out. As far as I can tell, it's working perfectly.
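
The loop in the middle is just plain nearest-neighbour scaling (a sketch, pretending one byte per pixel - the real code has to honour the bitmap's pixel format):

    ULONG x, y;

    for (y = 0; y < dsth; y++) {
        ULONG sy = y * srch / dsth;      /* nearest source row */
        for (x = 0; x < dstw; x++) {
            ULONG sx = x * srcw / dstw;  /* nearest source column */
            dst[y * dstw + x] = src[sy * srcw + sx];
        }
    }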

I won't commit the patch yet because I'm quite unsure of myself and I want to get a couple of people (particularly on native) to poke at it and make sure it's sane. Nonetheless I'm still quite proud of it. Unfortunately the scaling it produces looks quite ugly, particularly on FFE, so if I get into hacking on FFE some more I'll probably start looking into smarter scaling algorithms.

Update 8/5/07: This patch is now in the nightlies. Kal reports that Wanderer with a scaled background now starts instantly rather than with a brief pause, and FFE has gone from 8 to 20 frames per second on his native build. Awesome!

thursday, 3 may 2007

posted at 21:49
tags:

Support for creating and writing files was pretty much finished on the weekend, but my power test of copying the entire system to a FAT partition would cause things to explode in myriad ways - though not until a few hundred files had been copied. I've spent this week tracking down and fixing these problems:

  • There were some problems in my linked-list management code inside the cache which would occasionally cause an infinite loop.
  • A different bug in the same code would sometimes cause a segfault as a NULL pointer was dereferenced.
  • Following that, I noticed that something was leaking cache blocks. I implemented a function that dumps the internal cache state, including block use counts, after every FAT operation. The usage wasn't coming down properly when creating directory entries (ie making a directory or new file). Turns out I wasn't releasing a directory handle when finished.
  • Next, writing file contents was leaking blocks too. Again, I wasn't releasing a directory handle when updating the file size in the directory.
  • Finally, I skimmed through and found a number of other places in error handling code where directory handles weren't being released.

So, I'm pleased to announce that file writing is stable. I haven't seen any corruption in my testing, but obviously take care on real data (like your Windows 98 partition). As I noted previously, there's still a few little things left to implement to declare write "done", but for the most part you won't care ;)

sunday, 29 april 2007

posted at 00:56
tags:

I'm currently checking in some fixes that I believe will stabilise the file writing. There were a few places where I'd got the math wrong when generating directory entries, which would sometimes result in weird corrupted filenames (depending on what was on your disk before I incorrectly wrote directories onto it).

Assuming it's right, things are really starting to wind down on this code. Here's a vague todo list, for my reference and yours :P

  • Implement RENAME_OBJECT and the SET_* commands. SET_FILE_SIZE is the only interesting one, because it means allocating or freeing clusters to match the new size. None of them is difficult.
  • Implement file notifications. This is a mechanism where an application requests to be notified when a file or directory changes. Conceptually it's not difficult - just a list of listeners attached to every lock (and shared between duplicate locks). I think it would get a little hairy if I did it with the lock management functions in the state they're in, but I haven't started my cleanup of lock.c yet so I'll make sure I factor it into my planning.
  • Implement write-back caching.
  • Cleanup the DOS-side code (mostly main.c and disk.c)
  • Break out the rest of packet.c into ops.c
  • Make sure locks are being tracked properly. This should resolve the disk change issue on native too (where two volumes appear when you change the disk).
  • Fix the crash in native where you try to use the handler directly with DF0: (though technically this is an issue with the way DOS and packet.handler interact).

After that it's time for the next thing. Volume manager, ext2 or something else - I'm not sure yet.

saturday, 28 april 2007

posted at 22:09
tags:

As of yesterday you can now create files. It's still not perfect - you can break it by copying large numbers of files in one go (eg Copy ALL C: FAT:), which will give you some broken filenames. I'm working on it.

Fairly unmotivated to write right now, but I wanted to give an update. More later.

tuesday, 24 april 2007

posted at 14:27
tags:

As I've been refactoring fat.handler I've noticed that it's gradually changed from its original spaghetti into a fairly layered setup - the I/O cache, the FAT, the directory code, the file read/write primitives, then high-level operations code ("create a directory", "delete a file", etc), with the OS interfacing (packet handling, volume management, etc) at the top. This wasn't intentional, but it tells me I'm probably thinking about it the right way.

The OS interfacing code should actually be identical for all filesystems, which begs the question - why does every filesystem have to implement it? They shouldn't have to, so just as I intend to separate out the cache into a library, I also intend to build a library to sit between the OS and the filesystem. You could argue that this is redundant, since DOS already provides a filesystem interface. I don't intend to change that though - I'm not going to replace packets, but instead provide some generic handling code that will work for most of what you want to do. If it's not suitable, then don't use it - handle the packets yourself.

I think my operations code is the beginning of the model for this. Essentially, the packet handler will accept packets, decode the arguments (ie convert BCPL pointers/strings), ensure that they're sane (eg make sure locks belong to us), then call a function in the filesystem for the requested operation.

I figure the initialisation interface would be something like:

    fs = CreateHandler(FS_Read,      (IPTR) FATOpRead,
                       FS_Write,     (IPTR) FATOpWrite,
                       FS_CreateDir, (IPTR) FATOpCreateDir,
                       ...
                       TAG_DONE);

(plus other stuff for setting options or whatever).

Any unspecified operations will result in a "not implemented" error being returned to the caller.

Further, the library would do plenty of checking and munging of arguments so you can always be sure of what you're getting. Locks will always be guaranteed to belong to the filesystem. BCPL strings and pointers would be converted to their C equivalents. Deep subdirectory ops would be fixed up so that every function wouldn't have to know how to parse and munge paths. If I had all this stuff, fat.handler would be vastly simpler than it is now, and when I eventually implement ext2, I wouldn't have to copy/paste anything but just implement the specifics of that filesystem.

Another option for the interface might be to create a "no-op" filesystem base class and have filesystems subclass it. It's conceptually the same as the above but perhaps the interface is better. I haven't really looked at oop.library so I don't know yet what I'll do with it, but I will experiment further.

friday, 20 april 2007

posted at 12:54
tags:

I implemented DELETE_OBJECT today. As the name suggests, it deletes things - it's the power behind the C:Delete command. I'm actually quite pleased with how straightforward it was to put together - it suggests that I have the internal API for updating directories and writing things right. The process is to find the entry, delete it, delete any associated long name entries, and then free all the clusters the file was using. In practice it's slightly more complicated - the file can't be in use, if it's a directory it must be empty, etc - but it's mostly quite pleasant.

One big philosophical change I made today was to make it so the current directory (.) and parent directory (..) entries are never ever exposed to DOS. It was just confusing things - DOS has its own understanding of how to move to a parent directory (/), and thought that moving to .. was moving to a subdirectory. It meant having special checks everywhere to make sure you didn't try to actually operate on these entries (eg try to delete them). In the end it makes sense if they don't exist, so now the internal function TryLockObj(), which looks for files by name and locks them, will always return ERROR_OBJECT_NOT_FOUND for one of these files. Similarly, GetNextDirEntry() which is used when enumerating all the files in a directory will skip over the dot entries. The only place now where the .. entry is used is in the internal GetParentDir() function, and it finds the entry manually.

Removing input checking code while making things less confusing for the user is not something you get to do often, so I'm pretty happy with the change :)

thursday, 19 april 2007

posted at 10:17
tags:

I put in a marathon day of code yesterday - perhaps six hours by the end - and finally got directory creations working. The process is actually quite complicated as you have to handle all the weirdness that makes FAT so wonderful. First you have to take the name, and figure out how many directory entries are needed (FAT stores its long file names across multiple entries). Then you search the current dir for a gap of that many entries (gaps happen when files are deleted) or move the end-of-directory marker to make room at the end.

Having found space, you then generate the short name, comprised of what FAT calls the "basis name" and the "numeric tail". You've probably seen this if you've used disks created in Windows on older systems like DOS - a file called "mars attacks.html" gets converted to "MARSAT~1.HTM". The conversion process is non-trivial. After storing the short name, you then cut the long name up and store it across multiple directory entries.

At this point the name exists, and will turn up in a directory listing, but the job isn't done yet. Next we have to allocate space on the disk to store the directory contents, and put three entries within it - the "dot" (.) entry, pointing at the new directory (ie pointing to itself), the "dotdot" (..) entry, pointing to its parent, and the "end of directory" marker. Once this is done, we report success back to DOS and the calling application.

My code isn't perfect yet. Most significantly it doesn't do all its error checking, and it's possible for the filesystem to get into an inconsistent state if some low-level error occurred (like a hardware error). It also hasn't been well tested - it's undoubtedly trashing my filesystems in every interesting way. But it appears to work, and that's the most important thing. Creating directories is also the hardest bit of doing write support - the rest shouldn't take long to implement!

tuesday, 17 april 2007

posted at 09:14
tags:

I'm about to go out, but here's a couple of screenshots to demonstrate the progress of the last couple of days.

First, writing works in the most minimal sense. FAT volumes can now be renamed:

It's not much, but it proves that the underlying write infrastructure (ie WriteFileChunk() and cache_mark_block_dirty()) is at least slightly correct. I'm hoping to be writing files before the end of the week.

The other is that you can now mount your FAT volumes under native. It wasn't working because I made some bad assumptions about where the boot block lived, but once I straightened that out it did just fine:

It takes a little bit of messing to set up. If you want to try it, grab the latest nightly (or you may have to wait until tomorrow; I'm not sure if it made it into last night's build) and edit DEVS:DOSDrivers/FAT0. You'll need to get correct values for LowCyl and HighCyl from HDToolbox (or DOS or Unix fdisk or whatever). After that it should just be Mount FAT0:. Kalamatee is planning to add smarts to the installer to take care of detecting your partitions and writing the mountfiles when you install AROS.

Taking Francesca to the park and the creek now. I should be around in #aros in a few hours.

sunday, 15 april 2007

posted at 12:29
tags:

I'm on my break! Today is just the second day, and still the weekend, so I haven't really done much of anything yet, but it'll happen - all this time is making me giggle :)

I did massive amounts of cleaning on fat.handler during the week. The cache is being used for all I/O now. It no longer loads the entire FAT into memory, which should save quite a bit of memory on large filesystems. That forced me to get into the weirdness of FAT12. As the name suggests, each entry in the FAT is 12 bits wide. So that no bits are wasted, two entries are stored in three bytes. This requires a little bit of math to extract the entry you want, which gets even more complicated if the two bytes needed are split across different disk blocks. The original code never had to deal with this because it had the entire FAT in memory in one long chunk - there were no cache blocks. I'm pretty sure I've got it right - it's reading things correctly at least.
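
The math in question, ignoring the split-across-blocks complication (a sketch over a flat in-memory FAT, which is exactly what the old code had):

#include <stdint.h>

/* fat points at the raw FAT table; n is the cluster number */
uint16_t fat12_entry(const uint8_t *fat, uint32_t n)
{
    uint32_t off = n + n / 2;                 /* each entry is 1.5 bytes */
    uint16_t v = fat[off] | (fat[off + 1] << 8);
    return (n & 1) ? (v >> 4) : (v & 0xfff);  /* odd entries use the high 12 bits */
}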

So now I've started adding the necessary bits for write support. The cache has new functions for marking blocks dirty, though I still have to implement the actual writing stuff. It will have the two standard cache policies available - "write through", where blocks are written immediately when they're marked dirty, and "write back", where some job fires up every now and again and writes out any that have been marked dirty since last time. Writethrough is easier to implement and safer anyway, so I'll just do that for the moment.
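
Roughly what I have in mind for the dirty-marking entry point (a sketch only - the names and fields here are illustrative, not the actual code):

#include <stdbool.h>

enum policy { POLICY_WRITETHROUGH, POLICY_WRITEBACK };

struct cache       { enum policy policy; /* ... */ };
struct cache_block { bool is_dirty;      /* ... */ };

void cache_write_block(struct cache *c, struct cache_block *b);  /* flush to disk */

void cache_mark_block_dirty(struct cache *c, struct cache_block *b)
{
    b->is_dirty = true;

    /* write-through: push the block out immediately */
    if (c->policy == POLICY_WRITETHROUGH) {
        cache_write_block(c, b);
        b->is_dirty = false;
    }
    /* write-back: leave it for the periodic flush job to collect */
}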

I have new (untested) code for writing bytes to a file, which is nearly a straight copy/paste of the read code - I'll have to do something about that. The only major difference is that when it reaches the end of the file, it allocates and links another cluster rather than returning "not found". My algorithm for finding an empty cluster is completely stupid at the moment - it just searches the FAT from the start, every time. Eventually it will start looking around based on where it found a free cluster last time it looked. I'll also be allocating multiple clusters at a time under the assumption that we're nowhere near writing the end of the file yet. This reduces fragmentation. Obviously if they aren't all used before the file is closed, any leftovers get marked as free again. All this is for later though.
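
The smarter search is barely more code than the stupid one - just a remembered starting point (sketched over an in-memory table of 32-bit entries, where zero marks a free cluster):

#include <stdint.h>

uint32_t find_free_cluster(const uint32_t *fat, uint32_t nclusters, uint32_t *hint)
{
    for (uint32_t i = 0; i < nclusters; i++) {
        uint32_t n = (*hint + i) % nclusters;
        if (n < 2)
            continue;          /* clusters 0 and 1 are reserved */
        if (fat[n] == 0) {     /* free cluster found */
            *hint = n + 1;     /* start here next time */
            return n;
        }
    }
    return 0;                  /* no free clusters - disk full */
}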

Once we're able to write files I'll start looking at formatting fresh FAT filesystems. There's a pile of options that can be provided when creating a filesystem, but the existing ACTION_FORMAT command is woefully inadequate (being designed for OFS/FFS). I had an idea to allow things to query the handler for information about extended commands and options it supports. The handler could then return a ReadArgs format string to the calling process detailing the arguments it can take for commands (format, check, etc), allowing them to tailor their interface for each filesystem without them having to know the specifics of each themselves. This is something I'll look at in a bit more detail when the time is right.

Time to go put the daughter to bed, then I'll be back into it.

tuesday, 10 april 2007

posted at 10:33
tags:

Got back from our camping trip yesterday so I've just been flaking out and doing a few chores and stuff since. The trip was great - very relaxing, and got some four-wheel-driving in as well. I've never been before. It's very challenging - I consider myself a pretty good driver, but much of it is actually counterintuitive, like the need to accelerate down a wet slope because the wheels must stay in motion, otherwise you slide. It was a hell of a lot of fun.

I've got some web work to do for a client over the next couple of days, then it will be straight back into fat.handler. Three days of work this week, and then two weeks off - hurrah!

wednesday, 4 april 2007

posted at 07:55
tags:

This morning I checked in rewritten fat.handler directory code that uses the new cache. The code is now much cleaner, and I hope more readable. At the very least it's well commented and I can read it, and so can work on it. In terms of features, nothing has changed from before (except that you'll now get masses of debug information). It's an important first step though.

The next step is to rip out struct Extent and replace it with naive calls to the cache. The original point of extents (and libDeviceIO) was to make it easy to request multiple disk blocks in one IO request, which enables the disk driver and/or the disk itself to optimise the request. This ability was removed when I switched over to using the block cache. I will be bringing it back later as it's basically a required feature, but it makes the code more complicated. For now I just want a nice, clean naive implementation.

Following that struct ExtFileLock and the locking code will get an overhaul, which will probably lead to the packet handling code getting some work as well. All this is good. Once the entire filesystem is in a clean and stable state I'll begin work on write support. While not simple it will be vastly easier with the new code than it was before, as I'm trying to design things with writing in mind.

In other news, I was digging through old hard drives last week and found my port of JJFFE to AROS that I did a couple of years ago during my previous dive into AROS. I released the code not expecting to work on it again, but then Kalamatee did some great work to make the window resizable (though the changes make the whole thing unusable in hosted; might be a problem in AROS' X11 graphics code). That was enough to make it interesting to me again, so I'll probably keep it as a side project for when I need to clear my head. I'll set up a repository for it somewhere shortly.

Holidays are coming. I'm off on a camping trip this weekend (the Easter long weekend), and then taking two weeks off work from the 16th, and going away on the 21st and 22nd for my anniversary and birthday. By the end of the month I should be well rested and have a clear head. Of course work is still going to be insane, that's just the deal this year, but at least I'll have had a chance to reset. I'll be writing lots more code during the long break, I expect :)

thursday, 29 march 2007

posted at 20:57
tags:

Work is absolutely insane at the moment so I come home very tired, which means I'm only getting an hour or two a day to work on AROS. Despite the glacial pace I'm pleased with the progress I'm making.

I've completed the buffer cache (though the write portion is currently untested) and rewritten fat.handler's internal FS_GetBlock() function to use it. It's working just fine insofar as reading files is working. It's fairly mundane as there's only one user of it, but as time goes on it'll get more heavily used. I still have hopes for it eventually becoming a system-wide service (eg a cache.resource).

I'm currently pulling apart the rest of the code to remove all traces of its own caching, instead making it utterly dependent on the buffer cache. First on the chopping block is the directory code. DirCache and Extent code is being removed and the API is being changed to make it very much like the UNIX opendir() interface. You get a handle on a directory, and then call other functions to get individual entries, iterate over it, etc. This is much cleaner than what the handler had before, and the lessons learned here will serve well when cleaning up the other parts of the handler. At the end this will effectively have been a rewrite, but well worth it in my opinion.
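
The shape I'm aiming for, in the opendir() spirit (hypothetical signatures - the real names will grow out of the code):

/* get a handle on a directory, walk its entries, release it */
LONG InitDirHandle(struct FSSuper *sb, ULONG cluster, struct DirHandle *dh);
LONG GetNextDirEntry(struct DirHandle *dh, struct DirEntry *de);
LONG ReleaseDirHandle(struct DirHandle *dh);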

I'm back to reading up on filesystems as well. I think the next thing I'll tackle once FAT is done (so still a way off) is ext2. It really is straightforward, and I'll be able to borrow most of the FAT code anyway if I do it right.

I really do hope to have something to demonstrate soon. I get next Friday through to Tuesday off for Easter, then back to work for three days, then two weeks off for the Christmas vacation I never had. That should give me plenty of time to cut code, even allowing for the camping trip and the anniversary :P

sunday, 25 march 2007

posted at 14:52
tags:

I finished implementing the read side of the block cache yesterday, but I haven't even tried to compile it yet. I really don't like it. The logic seems to make sense, but it just feels wrong. I've learnt to trust that feeling. So I did the unthinkable instead. I actually did a little research into block/buffer caches to see what has come before.

Turns out I was on the right track initially. Tanenbaum's book described a scheme very similar to what I had originally devised on my own. Basically, we allocate a pile of block structures, which are the raw block data plus flags and other metadata. All the blocks are stored in a linked list, which is hooked up to a hashtable whose key is the bottom N bits of the block number. There's also a double-linked list of recently used blocks.

When a block is asked for, the hash bucket index is extracted from the block number, and the corresponding block list is searched for the block. If it's found, a use count is incremented and the block returned. If not, it has to be loaded from disk. A block gets recycled from the recently-used list, and if the data in there is dirty, it gets written out. Then we read the data from disk, and mark the block as used.
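
In (hypothetical) C, the shape of the thing is something like this - a sketch, not the real code:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define HASH_BITS 8
#define HASH(num) ((num) & ((1 << HASH_BITS) - 1))   /* bottom N bits */

struct block {
    struct block *hash_next;              /* chain within the hash bucket */
    struct block *lru_prev, *lru_next;    /* recently-used list */
    uint32_t      num;                    /* block number on disk */
    uint32_t      use_count;
    bool          is_dirty;
    uint8_t       data[512];              /* the raw block itself */
};

struct cache {
    struct block *buckets[1 << HASH_BITS];
    struct block *lru_head;               /* recycling candidates */
};

/* recycle a block from the recently-used list (flushing it first if
   dirty) and read block num into it from disk */
struct block *cache_load(struct cache *c, uint32_t num);

struct block *cache_get(struct cache *c, uint32_t num)
{
    struct block *b;
    for (b = c->buckets[HASH(num)]; b != NULL; b = b->hash_next)
        if (b->num == num) {
            b->use_count++;
            return b;
        }
    return cache_load(c, num);
}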

Earlier this seemed unworkable to me, but it seems this is fairly standard, at least as a first cut. The important bit is the low-level process running through the block list and regularly pruning and writing things to disk. Fortunately the whole thing is much easier to understand, so hopefully it won't take me long to have it written, and the code will be readable this time. Initially I'll implement it for single-block I/O, and later extend it to do multiple blocks where possible. This is also reasonably well established in OS theory - the concept is called "precaching".

Anyway, that's all. Back to my relaxing weekend - rebuilding my garage PC and playing Mariokart :)

friday, 23 march 2007

posted at 13:12
tags:

I got paid from the bounty, so I went out looting last night. I had planned to buy a 1GB stick of RAM for my laptop, but the wife told me that I had to spend it on something frivolous instead. She gave me the choice - I either had to buy a Nintendo DS or a Wii. I chose the DS, because the Wii games I'll want to play (Metroid Prime 3 and Super Mario Galaxy) aren't out yet and I still have Gamecube stuff to finish. Gub says she'll get me a Wii for Christmas :P

I've been pining for a DS ever since Daniel got one and regaled me with his tales of derring-do every fricking morning when I got in to work. I bought MarioKart DS so that I can have a clue of what he's talking about (although I already get some of it having played previous MarioKart games), and picked up New Super Mario Bros as well.

So far it's fun. MarioKart is always entertaining, and I've always liked Mario in 2D, though this one doesn't yet seem to have the depth that Super Mario Bros 3 did, back in the day. My brain is not quite in the right place to play lots of games yet - still full of work and code. It'll be fun for ten minute breaks though.

It could also be fun for homebrew stuff. This card looks like a great way to get started all in one go. A bit pricey though - nearly the cost of the console! AROS could be fun on this thing, haha. Maybe later :)

wednesday, 21 march 2007

posted at 22:32
tags:

I'm currently on a tangent within fat.handler. I started to rip out the caching, but because I hoped to add it back in later in some form I started making everything that needed disk data call a single function that gets the block from disk. Once I had that though, it really just seemed easier to actually implement a basic cache right now, one that could be expanded into something suitable for use by all filesystems later.

My basic requirement was that at all times the cache knows exactly what the contents of a block are and whether it needs to be written out or not. For this reason, I've decided to go with a model where there is only ever a single copy of a block in memory, each with a reference count. That way if one part of the code modifies a block, other parts of the code will see those changes and not have an out-of-date copy. And at all times, the cache can know what's going on.

The cache will maintain a list of dirty blocks and write them out at regular intervals based on some (configurable) criteria. Basically, it'll do all the hard stuff. The filesystem should just be able to say at any time "I need block X" and the cache will sort it out.

To do this I need an efficient data structure to store the blocks. My first thought was a kind of hashtable without the hashing bit - just modulus/bitmask the block number. We threw it around the office over lunch and did the maths, and it turned out that the overhead would be huge. B-trees (specifically, B+trees) looked to be the way forward, so I spent quite a bit of time trying to implement one.

I used to be a CS major, but for some reason I just can't work with proper algorithms and data structures, only wacky hacks. I still haven't been able to make my b-tree work, but thinking about it further I realised that a flat array and a binary search will actually do just as good a job in this case. B-trees really shine when the nodes are stored somewhere that is slow to get at (on disk). When it's all in memory, its advantages are much reduced.

Again, my brain conspires against me. It took me about three hours to implement a basic binary search over an array. I'm sorely disappointed in myself - this stuff is supposed to be child's play. At least it works. The basic stuff is in, with reference counting and all the rest. The array is currently eight bytes per entry - four for the key int, four for the data pointer. That may go up to twelve if I end up needing a 64-bit type for the key, but the overhead is still minimal. The entries get allocated in chunks (the chunk size will probably be configurable), and the array grows (and probably shrinks) as necessary.
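
For the record, the lookup is the textbook half-open-interval search (a sketch, with the 32-bit key described above):

#include <stddef.h>
#include <stdint.h>

struct entry { uint32_t key; void *data; };   /* eight bytes on a 32-bit system */

void *lookup(const struct entry *arr, uint32_t count, uint32_t key)
{
    uint32_t lo = 0, hi = count;
    while (lo < hi) {
        uint32_t mid = lo + (hi - lo) / 2;
        if (arr[mid].key == key)
            return arr[mid].data;
        if (arr[mid].key < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return NULL;   /* not found */
}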

Tomorrow I'll start adding the code that actually loads the blocks from disk. After that I should be able to start refactoring the handler code properly.

monday, 19 march 2007

posted at 09:37
tags:

Life is completely mad at the moment. I've still had a little bit of time to write code, but work is full on and after hours is craziness too, so I haven't had a lot of time to blog. Let's see if we can get up to speed.

I'm working to finish off fat.handler. That pretty much means write support. In the last week I've implemented the remaining "read" bits of the handler. File protection bits are mapped as best as is possible, file timestamps are converted and the free space on the volume is reported properly.

In addition to this I've been reading as much as I've been able to find about the filesystem format. Between that work, the official Microsoft documentation and the Wikipedia entry, I think I have a pretty good idea of how things are supposed to work. It's actually a nice filesystem to be learning on - nothing too fancy, but enough clever bits to make it interesting.

One thing I'm not understanding too well is the existing structure of fat.handler. I think I follow it, but it really seems to be a quick-and-dirty job for read access. It caches lots of stuff, which is fine - caching is a good thing - but the structures that are used for caching don't really seem suited to writing changes back to disk. The code is also quite spaghetti'd, and isn't well documented at all, which makes for a pretty unpleasant experience.

One of my plans for filesystems is a generic block-caching layer, which will make it so that filesystems won't have to cache blocks themselves at all. So, my intention is to remove all caching from fat.handler and hit the disk every time. This will make things vastly slower, at least initially, but it will let me understand the code enough to implement write support. I can then add caching back later once I have everything clean and documented (and working!).

I think I'll write something soon about those future plans. Basically it'll be a busy year, but my plan is to come out at the end with AROS having rock-solid disk/file stuff. That would make me happy :)

tuesday, 13 march 2007

posted at 14:56
tags:

A couple of screenies before I dash home. I added separate DOS types for the three FAT types this morning, and updated C:Info and DiskInfo to cope, so we get this:

I'm continuing to cleanup the code in preparation for adding write support. Time for the bus and more code!

monday, 12 march 2007

posted at 21:02
tags:

Got back to work today after a few days off. No bus trip means limited dedicated coding time, so I've mostly been writing email for the last week. It's nice to write some code again :)

Also got some movement on the subjects of those emails. On aros-dev I've been trying to get some consensus on the whole packet API issue and what to do with the IOFS system. Opinions ranged from doing nothing to removing it, but no one had really been willing to step up and throw their support one way or the other. I had a good chat with Fabio Alemagna in #aros last night, and we finally managed to establish some direction. Basically, we're in agreement that the DOS API needs to return proper FileLock structures where necessary (ie according to the AmigaOS autodocs), and the IOFS layer does nothing for us. We ran out of time and so didn't quite reach a set of actions or anything like that, but I'm feeling much better about making significant changes to DOS and whatever else, should I want to.

Meanwhile, over on teamaros I've been trying to get the bounty description clarified, as per my last post. Damocles came to the rescue, posting a simple "I think Rob is done, let's test and then hand over the cash" which I really appreciated. All going well I should get paid next week sometime :)

The pot has gone up in the last few days too: it's now at US$440. I was a little bemused by the fact that people would continue to throw money at something that was already in progress, but my wife pointed out that it's very much a cultural thing: Australians will generally take as much as they can for as little as they can get away with, whereas in many other places in the world the value of something is often considered independently of other factors like accessibility. Personally, I'm just humbled by the incredible support I've received while working on this project: bounty donations, blog comments, emails, and so forth. I really do appreciate the support, guys, and find it very motivating. Thank you!

Of course I'm not leaving things here. Packets or not, we still need working filesystems. I'm currently reading the FAT spec and will shortly be working on implementing write support. I'm actually quite pleased that such detailed specs are available - Microsoft have a bad rap in this area, but this doc certainly doesn't lack for anything.

Following that, I'll be looking into other filesystems, and possibly revisiting my FUSE work. And since the EFIKA bounty has now been assigned, I find myself with fewer distractions and newly refocused on getting decent filesystem support in AROS. I hope it lasts :)

friday, 9 march 2007

posted at 18:42
tags:
From: Robert Norris
To: TeamAROS
Subject: Changing the DOS Packets bounty

Hi Team,

I've reached a block on the DOS Packets bounty that I can't work my way
around, so I need this group to help me figure out what to do. Here I
outline the issue and suggest a resolution.

I've taken the existing bounty description to mean full support for DOS
packets on both the device/filesystem side (ie allowing us to compile
and use existing packet-based filesystems from AOS/MOS) and the API side
(ie DOS calls like DoPkt()).

The former is largely complete via packet.handler, as demonstrated by
the availability of fat.handler. The latter I believe to be impossible
to complete without either the removal or a significant redesign of the
AROS-specific IOFS system.

If I'm reading the bounty correctly, then I can't complete it. If the
powers that be (ie the designers and/or advocates) decide that IOFS
should be kept, then the packet API can't be made to work. If they do
decide that it should be redesigned or removed, then it's a huge amount of
work that I can't complete before the bounty deadline.

On the other hand, if the bounty doesn't include the API, then I'm done,
but I don't feel like that's really fair to the people who have thrown money
at this. The expectation was that with DOS packets available, Marek's
filesystems would be ported shortly after. Since that won't happen, I'd
feel weird about taking the cash if people haven't at least got
something close to what they paid for.

So, I propose modifying the bounty so that it's clear that it doesn't
include API, but includes a completed fat.handler that supports writes.
This way people who put up cash at least get something tangible at the
end of all this.

Of course something still needs to be done about IOFS, and I'll be
pursuing it further myself, but I think it needs to be outside the scope
of this bounty.

So to summarise, I'd like the bounty to read as follows:

 - ability to compile and use existing packet-based filesystems
 - a working port of FATFileSystem, extended to support writes
 - a porting guide to assist developers porting filesystems

Deadline would remain the same: 30 April 2007.

What do you all think?

Thanks,
Rob.

wednesday, 7 march 2007

posted at 22:33
tags:

The response to the packet stuff has been great, mostly because people can now read their FAT disks. It's exciting!

I have been tweaking a few things since the big release. There was one tedious problem that was preventing NameFromLock() from working correctly, which meant that anything more than a naive file copy wasn't doing the right thing. It turns out that NameFromLock() calls FSA_OPEN with a base handle pointing to a file rather than a directory (as it should be) in order to get a handle on the directory the file is in. This maps closely enough to ACTION_PARENT, so I've added code to detect that case and do the right thing. It's a hack, and really should be fixed in DOS, but it'll do for now, and makes this possible:

After a bit of discussion on the mailing list, I've also implemented an autodetection method and so got rid of that stupid "type = packet" thing. It's pretty straightforward - DOS tries to open the handler as a device, and when that fails, packet.handler has a turn at it. It's naive and has a little overhead (loading the binary twice), but it works nicely.

The next thing to do with packets is to implement the API. I thought this was going to be the easy bit, but it's a nightmare. I'm not going to go into too much detail here - join the mailing list if you want the gory bits. Essentially, if you take the required changes to their logical end, it means getting rid of the IOFS system entirely. Now I have no problem at all with this - as I've said before, IOFS really offers nothing compelling over packets - but I think getting rid of it is going to take quite a bit of political wrangling that I'm really not interested in.

It's tricky. The way I read the bounty requirements I really need to get the API side working, but that really can't happen easily, and I'm worried I won't have time for it. But I'm still waiting to see the outcome from the mailing list; hopefully someone will make a decision. I may have to force the issue - that should be entertaining :P

monday, 5 march 2007

posted at 13:29
tags:

Hmm, haven't posted for a little while.

Last night I dropped all of the packet code in AROS SVN. That includes packet.handler, fat.handler and the various core changes needed to make it all work. A few people had been asking to test, and I want some feedback, so it seemed like the right time. I've already had a few questions and feedback which I'm working through, but generally the vibe is good :)

I heard back from Marek about getting updated FATFileSystem code. He sent me a patch with some bugfixes for the version I have, but unfortunately for us he isn't able to give us more recent versions (ie write support) as he now has a commercial arrangement for his code. It did give me a little closure though - I now know the direction to take, and I'm not waiting anymore. I hadn't realised that it was such a burden, but it was - Saturday was a much happier day because things weren't up in the air anymore.

I started trying to port another filesystem (SFS), and found it's still pretty painful, because most handlers want to directly manipulate the BCPL strings that they expect to receive. So, to make it easier, I've made packet.handler convert C strings to BCPL strings before passing them on to the packet handler, regardless of whether the AROS core is actually using them. It introduces a small overhead but is probably worth it to make things easier to port.
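
The conversion itself is trivial - a sketch:

#include <string.h>

/* BCPL strings are length-prefixed rather than NUL-terminated; the
   single length byte limits them to 255 characters */
void c_to_bcpl(const char *src, unsigned char *dst)
{
    size_t len = strlen(src);
    if (len > 255)
        len = 255;
    dst[0] = (unsigned char) len;
    memcpy(dst + 1, src, len);
}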

Here's my TODO list for packets. It may be extended later, as I find things, but this will do for now:

  • Implement remaining IOFS->packet conversions
  • Fully test write/modify commands. This will require a filesystem that supports them, so I'll need to port another handler, probably SFS. This will also be useful as a "second implementation" to confirm that packet.handler is suitably generic.
  • Write a porting guide
  • Investigate/implement partition auto-detection
  • Complete and cleanup packet->IOFS conversion in dos.library *Pkt() functions

The hard stuff is out of the way; it's mostly mopping up now.

wednesday, 28 february 2007

posted at 14:25
tags:

It seems every week there's a new discussion about how to bring a proper browser to AROS. I've seen talk about how to structure the bounty (if we have one), which codebase to use, whether the source should be closed or open, and so on. Here I give my take on the whole thing, and offer a proposal for how to proceed.

In my opinion the requirements specified by the previous bounties have been seriously misstated. The current bounty to port MOS KHTML has a whole $10 attached to it. The bounty for AWeb Lite is better, but it's not a browser that is going to meet most of the requirements people have. There's no question that a browser is important, and I think that plenty of people would be willing to drop some cash on it if they knew that they'd get something modern and usable out of it.

The main thing to consider is what the users need. Users don't care how their browser works or what software lies under it. All they care about is that they can visit the websites they need and do whatever they can do elsewhere. To that end, I wouldn't mandate any particular codebase, but instead require that the browser support some set of technologies (CSS, JavaScript, XMLHttpRequest, etc) and/or some set of sites (AROS-Exec, Slashdot, GMail, etc).

There's other factors, of course. Ideally such a browser would be in two parts: a library that can be embedded in other applications, and a UI that uses that library. It would be wonderful if the UI was Amiga/AROS specific, meaning that it uses Zune, datatypes, and generally fits the rest of the system.

I also feel pretty strongly about it being open source. I've seen proposals that Sputnik, which is partially closed, be ported, but there's two problems with that. The first is that by offering a bounty for it we're pretty much limiting the bounty to the original developer - no other developer can ever hope to take on the task because the source isn't available.

The other problem is about maintenance. A closed-source browser means that we're beholden to a single developer/vendor for updates. The web is a fast-moving place. To keep our browser up to date and thus still useful for anything a user might want to do, it will need to be updated. It will need patching, particularly for security bugs. It will need new features. Without the source available, we can't have a team of people contributing, so things won't move as quickly as they could, and should the developer decide to abandon the project, then we're screwed.

(There's a certain irony in the fact that a project that exists to provide an open alternative to a closed system that was abandoned by the vendor for a number of years would openly embrace the potential for the same fate for a fairly fundamental piece of technology).

It's worth noting that, if done correctly, this bounty could actually end up getting quite large. Not limiting the potential developer base and code base means that outside developers could take this project on.

So, my proposal is this. Close the existing bounties for web browsers, and start a new one. Put the money from the previous bounties into it. The bounty requirements will be to produce a browser that has a Zune-based UI, can usefully access a number of common sites (listed, including "hard" things like GMail), is reasonably standards conformant (with links to pages that can test this conformance), and has a clean separation between the engine and the UI.

Note that the browser UI does not need many features. I'd settle for straight browsing - no bookmarks, no sidebars, and so forth. The rendering engine is where most of the complexity is, but there's a few excellent rendering engines available (such as WebKit, KHTML and Gecko), so most of the work has already been done. Features can be added as part of normal development outside of the bounty.

Of course, there's no requirement that one of these engines be used, as long as the result works.

Because the bounty is deliberately light on technical details, I'd recommend that anyone applying be asked to show how they intend to meet the requirements. That would mostly be showing what engine they intend to port.

And if any of the TeamAROS crew are reading and seriously think this is a good idea, I'd be very happy and willing to act in some sort of sponsor role for this bounty, working with whoever ends up taking it on to make sure the requirements are going to be met and giving them whatever assistance they need. I'm happy to do any and all legwork on this, actually, because I consider it to be of great importance.

On the other hand, we can just carry on as usual, and eventually I'll port WebKit and that will be the end of it. I'm already idly working on this. So idle, in fact, that I'm not actually working on it, but I do poke at the makefiles every now and again.

wednesday, 28 february 2007

posted at 13:13
tags:

I mentioned in a comment that FUSE-based filesystems could be fairly easy to port since they're not in the kernel and therefore shouldn't have tight dependencies on the kernel VM subsystem. I've had a chance today to investigate this a little further.

FUSE-based filesystems run as standalone processes. In their setup they pass a single object to the FUSE library that contains pointers to handler functions for each filesystem operation. They then call a mainloop function and let the library dispatch filesystem calls from the kernel appropriately.
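
On the Linux side it looks roughly like this (a stripped-down sketch against the FUSE 2.x API; a real filesystem fills in many more operations):

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static int my_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;   /* just an empty root directory */
        return 0;
    }
    return -ENOENT;
}

/* the single object full of handler pointers */
static struct fuse_operations my_ops = {
    .getattr = my_getattr,
};

int main(int argc, char *argv[])
{
    /* the mainloop: dispatches kernel requests to my_ops */
    return fuse_main(argc, argv, &my_ops, NULL);
}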

To get this on AROS, we'd just need to implement the FUSE library interface. Like the packet stuff, this would be done via a special fuse.handler that would handle conversion from IOFS requests to FUSE calls and back again. It'd probably be a little more complex than packet.handler as FUSE is designed for POSIX operations and semantics, so there'd likely be multiple FUSE calls for certain operations. I don't think that would have to be a huge problem though.

The Type option that I added to the Mountlist would then get the handlers online, eg:

FileSystem = ntfs.handler
Type = FUSE

Many different filesystems are available using FUSE. Most of them aren't particularly useful other than in very specific application domains, but a fuse.handler would immediately give us support for NTFS and ZFS. Those alone are reason enough to do it, so I'll start looking into it.

Packets are basically done anyway. There's a few issues to sort out and a heap of testing, but without packet handlers available I can't do much. I'll probably mostly backburner packets for a little while (maybe a week or two) while I chase Marek, and work on fuse.handler. It'll help the packet work anyway, as it'll give a second implementation of the filesystem proxy model. This should be fun :)

tuesday, 27 february 2007

posted at 12:59
tags:

Now that the filesystem actually works, things have got a little boring. There's still plenty of work to be done, but not much I can test without getting updated FATFileSystem code. I have been thinking about porting some other filesystem like FS1541 or amiga-smbfs, but that's boring too, so it's a little difficult to get motivated.

But work must go on. I've implemented (but not tested) a pile more iofs->packet conversions, to the point where there's only a few left. I've also added support for packet systems to expansion.library. Adding FAT to the Mountlist will be as simple as adding an entry like this:

FileSystem = fat.handler
Type = packet

The old StackSize and Priority options for setting up the filesystem task are also back in action. This arrangement sucks a little - I really would have liked the handler type to be auto-detected - but our Mountlist options are different enough to the original that things wouldn't carry over properly anyway. So this will do until either enough people complain or someone proposes something better.

I'm also starting to poke around in the dos.library *Pkt() functions to figure out what's needed to have it just quietly pass packets through to packet-based filesystems. That should be fairly straightforward, but mostly theoretical since nothing on AROS currently uses these interfaces directly (mostly because they don't work). I'll probably write a small program to do some basic file operations using packets and make sure they get handled properly, and that will be it. I also need to finish support for packet->iofs conversion; AROS has this already but it's not finished.

I just found PickPacket on Aminet, which looks like it might be useful for testing. I'll have a go at porting it on the way home today.

friday, 23 february 2007

posted at 23:57
tags:

Just got back from bowling and Daytona. Had a couple of drinks in quick succession so I'm a little buzzed. I'll write this and then go to bed; it's 1am.

Today I finally found and fixed the last crashing bug - some pointer arithmetic had gone awry, resulting in the stack being trampled. I'm not entirely sure what the problem was, but a slight tweak fixed it up.

With that gone, I now have stable support for traversing the directory hierarchy and reading and examining files. The filesystem can be browsed via Wanderer, which seems to work fine. Currently using multiview to display files isn't working (tested with both a PNG image and an AmigaGuide doc), and I'm not sure why, but I believe it to be a bug in FATFileSystem itself.

I've contacted Marek to try to get an updated version, but haven't heard back. If anyone reading knows of another way to get hold of him, could you please prod him and see if he got my mail? Perhaps his email address has changed or something, these things happen.

While I'm waiting I'll be working on getting the mounting stuff (dos.library, expansion.library and C:Mount) to know how to set up packet handlers. I'm thinking some kind of simple "proxy" Mountlist option, though I also need to implement AmigaOS options like StackSize and Pri. Shouldn't be hard, just needs a little thought. I'd really love it if the system could just auto-detect the handler type, but I don't see how to make that happen without requiring modifications to the packet handler itself. That would rule out binary compatibility with old filesystems, which I want to keep. So for now, users will just have to set it up in the Mountlist.

tuesday, 20 february 2007

posted at 14:56
tags:

We're in the middle of a heat wave here in Melbourne, as is normal for this time of year. Both Saturday and Sunday were up around 38-40 degrees (celsius), so I couldn't find much motivation for anything other than dozing on the couch and complaining.

That said, things continue to move. It seems that my typical coding style is to write about five lines, then chase a crash for a few hours/days until finally finding a poor assumption somewhere deep in dos.library. All of DOS assumes (probably reasonably) that the AROS-specific fields in struct DosList are filled out, so when FATFileSystem decided to add a volume to the system (something filesystems that handle removable media can do), it resulted in a corrupted DOS list and a crash on the next DOS operation. A little detection code in AddDosEntry() was all that was needed.

I think I've basically finished the port of FATFileSystem. It had been assuming both that it was running on a big-endian machine and that BCPL strings really were BCPL strings, rather than the normal C strings that Linux-hosted AROS uses. I rewrote parts of the code to take care of this when running on AROS, and it's good now. One more problem removed.

Finally, I've got the basic framework for converting IO requests into packets, and converting their results back again. I'm rather proud of the setup, actually. On receiving an IO request, a new packet is created and the request stashed in dp_Arg7, which is rarely (never?) used. The IO request type and parameters are converted and stored in the packet, which is then pushed to the handler on its process message port. Rather than wait for the reply, the request handler now returns, resetting IOF_QUICK to inform the caller that it will have to wait for a response.

A PA_SOFTINT port gets set as the reply port for the packet, which results in a call to another function within packet.handler that takes the result packet, extracts the original IO request from dp_Arg7, populates it with the results from the packet, and replies to the request message so that the caller can receive it.

All of this means that calls to packet handlers are truly asynchronous if the caller wishes them to be, and also means that we only need two context switches for a packet round-trip. This setup makes it exactly like the traditional AmigaOS environment for packet handlers, and means that packet-based filesystems shouldn't perform any worse on AROS than they do on other systems.
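
In rough C, the send side looks something like this (the names and structure are mine, for illustration; the real packet.handler differs in detail):

#include <proto/dos.h>
#include <dos/dosextens.h>
#include <dos/filesystem.h>
#include <exec/io.h>

struct mount {                          /* per-mount state, illustrative only */
    struct MsgPort *handler_port;       /* the handler process's port */
    struct MsgPort *reply_port;         /* the PA_SOFTINT reply port */
};

LONG convert_command(struct IOFileSys *iofs);   /* eg FSA_OPEN -> ACTION_LOCATE_OBJECT */

static void send_request(struct IOFileSys *iofs, struct mount *mnt)
{
    struct DosPacket *dp = AllocDosObject(DOS_STDPKT, NULL);

    dp->dp_Type = convert_command(iofs);
    /* ... convert the request parameters into dp_Arg1..dp_Arg6 ... */
    dp->dp_Arg7 = (IPTR) iofs;          /* stash the request for the reply side */

    /* the reply lands on the PA_SOFTINT port, which runs the second
       half of the conversion and replies to the original IO request */
    SendPkt(dp, mnt->handler_port, mnt->reply_port);

    iofs->IOFS.io_Flags &= ~IOF_QUICK;  /* tell the caller to wait */
}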

So far I've only implemented a few filesystem requests:

  • FSA_IS_FILESYSTEM (converts to ACTION_IS_FILESYSTEM)
  • FSA_OPEN (converts to ACTION_LOCATE_OBJECT)
  • FSA_CLOSE (converts to ACTION_FREE_LOCK or ACTION_END)
  • FSA_DISK_INFO (converts to ACTION_DISK_INFO)

These are enough to do this:

The "filesystem action type unknown" is in response to an attempt to perform FSA_EXAMINE, which I haven't implemented yet. That should happen within the next couple of hours on my bus trip home.

wednesday, 14 february 2007

posted at 21:44
tags:
[packet] in init
[packet] in open
[packet] devicename 'fdsk.device' unit 0 dosname 'PKT'
[packet] couldn't load fat.phandler
[packet] couldn't load L:fat.phandler
[packet] loaded DEVS:fat.phandler
[packet] starting handler process
[packet] in packet_startup
[packet] calling handler
[fat] starting up
[packet] started, process structure is 0xb7616c48
[packet] sending startup packet

FATFS: opening libraries.
        FS task: b7616c48, port b7616ca8, version: 0.1debug [AROS] (Feb 14 2007)
        Device successfully opened
        Disk change interrupt handler installed
        Initiated device: "PKT"
Returning packet: ffffffff 0
Handler init finished.

[packet] handler fat.phandler for mount PKT now online

Got disk change request
        Disk has been inserted
        Reading FAT boot block.
        Reading sector 0
        DoIO returned 0

        Boot sector:
        SectorSize = 2
        SectorSize Bits = 1
        SectorsPerCluster = 4
        ClusterSize = 8
        ClusterSize Bits = 3
        Cluster Sectors Bits = 2
        First FAT Sector = 256
        FAT Size = 5120
        Total Sectors = 256
        RootDir Sectors = 32
        Data Sectors = -10448
        Clusters Count = 1073739212
        First RootDir Sector = 10496
        First Data Sector = 10528
        Invalid FAT Boot Sector
Done

That there is the output of the moment of truth, where you know that you're on the right track and everything is going to work out OK. I had the same kind of magical moment when working on tap.device, where the foundation is in place and the rest is just adding features. It's extremely satisfying.

For the uninitiated, this is the debug output from FATFileSystem as it mounts a ten-megabyte image created under Linux with mkfs.vfat and made available to AROS via fdsk.device. It seems to be correctly reading the image, which means my replacement block code is correct, and the handler is happy doing its own thing.

This comes at the end of over two days of completely depressing debugging work. I've been deep inside LoadSeg(), I've disassembled the handler code, and I've looked desperately for any kind of unusual AROS-ness that might be causing gdb to spit up some truly outrageous output, such as the same function appearing multiple times in the stack trace.

The problem was eventually found in AllocDosObject(). This function, among other things, allocates struct StandardPacket objects, which are a kind of wrapper around the normal DOSPacket structure, providing an Exec message as well as the packet itself. The thing it doesn't do is link the packet and the message, so any attempt to access one via the other resulted in a null-pointer dereference and a crash.
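
The fix amounts to two assignments after allocation, following the usual AmigaOS convention that the message and the packet point at each other (sketched from memory):

/* sp is a freshly allocated struct StandardPacket */
sp->sp_Msg.mn_Node.ln_Name = (char *) &sp->sp_Pkt;   /* message -> packet */
sp->sp_Pkt.dp_Link         = &sp->sp_Msg;            /* packet -> message */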

I have no idea why gdb was handling it so badly, but even stepping the code before the crash produced the wrong values. Checking the value for a certain global pointer yielded the "base" value of that symbol in the executable itself before relocation, which is what led me down into LoadSeg(). In the end, printing the value showed that gdb was quite wrong and had been leading me off the scent.

I was really excited when I finally got this all sorted out and got the thing to run. So excited that I very nearly cheered and punched the air in the business meeting I was in at the time. I was bored, and coding excites me :)

Next step is to implement IOFS-to-packet translation in packet.handler. Soon I should be reading actual files :)

monday, 12 february 2007

posted at 12:53
tags:

No huge progress, but a few small things to report.

I've got FATFileSystem building the way I want it. Turns out the whole main() thing was totally unnecessary - it's enough to build with usestartup=no uselibs="rom". That vastly simplifies things. Now the loader is simply LoadSeg() followed by AROS_UFC0(). I've also removed all the references to DeviceIO and replaced them with a naive block getter/setter. It's untested, and probably performs woefully, but it should work. All the other bits that had been commented out because of missing things in the AROS headers are now re-enabled, so the driver itself should be ready to go.

Christoph did a little sleuthing for me, and managed to get a new email address for Marcin Kurek. I resent my request for information, and promptly got a reply! He sent headers, and is looking around for the source code, which may be lost as he says this is quite an old piece of code. My suspicions are confirmed though - it's a block caching layer with support for various device backends - standard trackdisk, NewStyleDevice, TrackDisk64 and SCSIDirect. I don't know what most of these are (though I can guess), but they're out of scope for this project. I'll worry about this stuff further when the time comes for writing porting instructions.

I've started re-adding things to the DOS headers to support packets. The first thing I did was put dol_Task back into struct DosList. Upon recompiling, AROS segfaulted before it started. Some tracing revealed that struct DeviceNode and struct DevInfo need to have the same layout as struct DosList, as they need to be happily converted to and from it via casting. That's completely braindead, in my opinion, but such is life. Wholesale adding all the missing stuff in one swoop caused no crashes (yet), so I'm guessing that's enough for now.

The next step, which I've just started, is to add two conversion functions to dos.library, IOFSToPacket() and PacketToIOFS(). They really should be internal-only helper functions, except that they'll need to be accessed by packet.handler, so they'll just be documented as AROS-specific and recommendations made to simply use DoIO() or DoPkt() as appropriate. All this may change, as I'm starting to see signs of my current architecture fraying a little at the edges. Not enough that I can put my finger on it exactly, but the warning signs are there. Fortunately most of the code I've written so far will be required no matter what, so I'm not too concerned just yet.

And for the curious, I'm now storing my code in Subversion. It's not everything yet - I'm also making changes to DOS and its headers. Remember that it's all extremely fluid, but any feedback is quite welcome.

sunday, 11 february 2007

posted at 10:53
tags:

"I must've put a decimal point in the wrong place or something. Shit, I always do that. I always mess up some mundane detail." -- Michael Bolton, Office Space.

And so it is with me. I consider myself a fairly good programmer, but I always make the most ridiculous mistakes, which usually cost me hours or days in debugging. Case in point: packet.handler creates a process to mimic the environment required for traditional filesystems. The structure of the startup code for these processes is the same as everywhere else in AROS - create the process and have it wait around until signalled by the main (creating) process to run.

For some reason though, whenever my newly created processes called WaitPort() the whole thing would segfault. I chased this around for over two days. Then, in desperation, I started writing a small program to try and capture just the relevant pieces of this setup for testing. In these cases I usually copy by hand rather than doing a real clipboard copypasta, so I make sure my brain sees all the code as I go.

As I was copying, I noticed something that clearly wasn't going to work, so I fixed it on the fly. A few seconds later, my brain kicked in. Sure enough, the same problem appeared there. Same fix, recompile, run. No crash!

The problem? CreateNewProc() returns a pointer to the newly created process. I store this in an (effectively) global chunk of memory. The new process was accessing this chunk of memory to get its process structure, but of course, it was doing this before CreateNewProc() returned in the main process. Invalid piece of memory, crash!

The solution is easy. Have the new process call FindTask() to get hold of its process structure, and all is well.
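
That is, the very first thing the child does is ask Exec who it is:

/* from inside the new process: no shared memory with the parent, no race */
struct Process *me = (struct Process *) FindTask(NULL);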

Avoiding this kind of thing is kiddie stuff for multithreaded programming. I've done this hundreds of times before. It's simple, and thus exactly the kind of thing I screw up.

thursday, 8 february 2007

posted at 22:22
tags:

packet.handler is coming along, but of course I hit yet another obstacle today. The loading mechanism I described previously is fine, except for a fatal flaw: OpenLibrary() loads shared libraries, meaning that I'll only ever get one instance of the packet handler. That would be fine, except that FATFileSystem (and probably every other handler) assumes it gets a unique instance per mount - it has global variables.

There are three ways around this that I can think of:

  • Implement my own loader that is basically a copypasta of the existing OpenLibrary() implementation. Duplicating that much code is an awful idea.
  • Hack up OpenLibrary() (or more specifically the LDDemon) to know about packet handlers and treat them specially. I'd feel really nervous about that - packet handlers are hardly "first class" objects like libraries and devices are.
  • Turn the packet handlers back into real processes, rather than libraries. That is, give them a main(), and call them from RunCommand().

I'm taking the latter option. It sucks a little more for porters, as they have to add more code, though at least it's minimal (and again, easily described). It's also a little weird in that it will make the handlers runnable from Workbench/CLI, though they won't do much. However, I'm going to recommend that the handler main() do a little bit of detection, and if it thinks it's being run by a user, bail out. I believe a main() like the following should suffice:

#include <stdio.h>
#include <proto/exec.h>
#include <dos/dosextens.h>

void startup(void);   /* the handler's real entry point */

void main(void) {
    /* pr_TaskNum is only non-zero when the program is started from the CLI */
    if (((struct Process *) FindTask(NULL))->pr_TaskNum != 0) {
        printf("this is a filesystem handler, and can't be run directly\n");
        return;
    }

    startup();
}

I've run out of things to write, I suppose because I really haven't made much progress since this morning. It's getting late too, so I'm going to go to bed.

thursday, 8 february 2007

posted at 08:11
tags:

I spent a couple of days beating my head against the AROS dark magic that holds everything together. I got FATFileSystem building, but on trying to call into it with my loader, I'd get a segfault every time I tried to make any kind of function call.

In desperation I stripped back the library to just a single one-line function that printed some (rather unsavoury) text and exited. That worked. The whole thing only fell over when the rest of the files were linked. The confusing part was that they weren't being used - in theory, they should just be random data along for the ride.

A brief rant in #aros yesterday got an answer from Michal (who else?). Apparently AROS has some lovely magic that automatically makes sure your program has a valid declaration and value for SysBase, which is sort of like the global context for the operating system - most system calls (like AllocMem()) actually take SysBase as an extra argument, though this is #define'd away from the user. It's a nice scheme that works well, unless, as was the case here, you have explicitly declared SysBase in your program. In that case, AROS assumes you know what you're doing, and you're expected to set it to the correct value yourself.

I've now surrounded the declaration in a #ifdef __AROS__ conditional, and it's loading fine. I don't mind that this feature is there - it makes sense and is useful - but once again, lack of documentation hurts me.
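
The guard amounts to something like this (equivalent in intent to the #ifdef above):

/* AROS declares and initialises SysBase for us; only provide our own
   definition on systems that don't */
#ifndef __AROS__
struct ExecBase *SysBase;
#endif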

On the topic of documentation, in the last few days I've managed to procure soft copies of both the ROM Kernel Manuals (thanks Hogga!) and the Guru Book (thanks Michal), though the latter is in poor shape, being a scan/OCR of the book. It's serviceable though, and makes for interesting reading. I'm hoping to find time to convert all of these to HTML soon, which should make them much more useful.

Back to the code: I've started implementing the loader code in packet.handler. Once that's done, it's on to the first of many tricky bits - re-adding things to the DOS headers that were removed (or at least commented out) when AROS switched away from packets. Things like struct FileLock, dp_Port, and other excitement. Those will be the first core changes. Yikes!

Oh, and I haven't heard back about deviceio.library yet. I'll have to start trying a little harder. UtilityBase might be a good place to look.

tuesday, 6 february 2007

posted at 09:26
tags:

I'm making good progress, and am quietly confident about success. I can see the next few steps I need to take, which always motivates me, and usually means I'm on the right track.

Yesterday I started porting FATFileSystem to AROS. Of course it won't work, given that packets aren't available, but I'm looking to have something around for me to write the emulation around. The port has not gone without issue - it seems to depend on a deviceio.library, but Google knows nothing about it. I have managed to track down the author, one Marcin 'Morgoth' Kurek, so I emailed him last night asking for more information. My goal in all this is to make porting filesystems as simple as possible, so I want to make sure AROS has everything a packet filesystem might need. Marek Szyprowski's filesystems use this library, and my understanding is that they will be the first filesystems that are ported, so making it available can only be a good thing. For now, however, I've just commented out the code, and a bunch of other stuff too, mostly related to missing DOS fields and structures in AROS. My goal is compiling, not working.

As far as I'm able to tell, traditional packet handlers are basically normal programs except that they use a function called startup() instead of main(). I can only assume this means there's a specific loader somewhere in dos.library to load and run them. I had planned to write a loader of my own in packet.handler to do this, but it proved more difficult to get the thing to compile than I'd anticipated. As I was investigating, I came up with a better idea - make the handler a library, with the startup function as a single vector entry point. This is easy to realise - all that's necessary is to add AROS build files (mmakefile.src, lib.conf, a header to define the LIBBASE structure) and then change the entry point, from this:

void startup(void)
{
    ....
}

to this:

AROS_LH0(void, startup, LIBBASETYPEPTR, LIBBASE, 5, FSHandler)
{
    AROS_LIBFUNC_INIT
    ....
    AROS_LIBFUNC_EXIT
}

It's a small requirement, but one that is easily described in a porting doc, so I'm happy with it.

Then, in packet.handler, the "loader" becomes something as simple as:

    handler = OpenLibrary("DEVS:fs.phandler", 0);
    AROS_LVO_CALL0(void, struct Library *, handler, 5, );

It'll be a little more complex, as it will have to set up a process and such, but that's the basic idea. Work will start on the loader on the way home this afternoon, now that FATFileSystem is compiling :)

monday, 5 february 2007

posted at 22:28
tags:

Last night a faint pink line was the bearer of wonderful news.

I'm going to be a dad again :)

sunday, 4 february 2007

posted at 22:52
tags:

I don't feel like I've got much to write, since I've spent the weekend just reading code and getting more and more confused, but Tom_Kun (of AROSAmp fame) told me to just write about the confusion and bemoan the lack of documentation. Sounds at least as interesting as what I usually write about, so I accept his challenge :)

I've given a lot of thought to how to make packets happen. Going back to pencil-and-paper design, I came up with a block diagram that had IOFileSys and packets operating "side-by-side", so I started digging into the code.

I seem to have a weird sixth sense that fires when I'm coding something wrong. I usually can't point to exactly where it's wrong, but I've learnt to trust that sense. In this case, it fired, and I could work out why. The new system needs to allow IOFileSys commands sent via DoIO() to still reach a packet-based handler if appropriate. This means DoIO() accepting the command, translating it to a packet, then calling DoPkt().

The problem? DoIO() is in exec.library, while DoPkt() is in dos.library. Thus, Exec gains a dependency on DOS. That's wrong.

This forced me to look deeper, so I went into the AROS port of AmiCDROM, the CD-ROM handler. Both AmiCDROM and SFS were ported by adding a IOFileSys-to-packet translation layer to the handler itself. This model seems reasonable, so I've changed tack. I'm going to try and build a "generic" packet.handler that can load and wrap packet-based handlers.

The model is pretty straightforward, and I plan to copy and clean up code from AmiCDROM/SFS to get it running. But now I have to deal with the problem of getting the handler online. I figure it's something loadable, like a device or library, so I've dug deep to find out how to do this. As far as I can tell, I want a combination of LoadSeg() and CreateNewProc() with the NP_Seglist tag. Mount and the Mountlist also need some extending so that you can specify to use the packet handler as well as a real handler (unless some sort of auto-detection can be done), but I think that's the way forward, at least as a first implementation.

The hardest part of all this is that I have barely any examples of how the packet layer is supposed to work. I've learnt heaps, but I'll really have no way of knowing if it's right until someone tries to port a filesystem to it. I hate working with so many unknowns.

There's a massive documentation void in AROS - I can muddle through the code, but there's not a lot of commentary, and what is there is often vague or unhelpful. I'm going to turn this around at least in my corner - this project will have good comments and full higher-level documentation that explains how the whole thing hangs together.

Hoping to write a little code tomorrow, haha.

friday, 2 february 2007

posted at 22:53
tags:

As you probably expected, I've finally applied for the DOS packets bounty. While it hasn't technically been accepted yet (since the Team AROS mailing list is having some issues), I'm working on the assumption that it will be accepted, and starting work accordingly.

Deliverables are as follows:

  • Major updates to DOS such that it can accept either packets or IOFileSys commands and either pass them through to the filesystem if it is of the same type, or convert to the other type first. Similarly, the responses will be passed or converted as necessary.
  • A console based tool that can issue both packet and IOFileSys commands to DOS. Mixing both command types should work seamlessly. This will be my primary testing tool, and so is the first piece I'll be working on (already in progress). I expect that I'll have to extend it throughout the project.
  • A working port of Marek Szyprowski's FATFileSystem. This is the one that Michal sent me last week, that I have permission to release under the APL (and thus include in the AROS source tree). Its packet based, so the aim is to require minimal actual porting work. Its read-only, and perhaps I'll add write support at some stage, but thats for another project, and isn't included here.
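
To make the first deliverable a little more concrete, here's roughly what the translation looks like for a read, going from the IOFileSys world into a packet. The field mapping is from the old packet documentation; the IOFileSys side is simplified, and the real code obviously has to deal with replies and errors too:

#include <dos/dosextens.h>

/* sketch: express an IOFileSys-style read request as an ACTION_READ packet */
static void read_to_packet(struct DosPacket *dp, SIPTR fh_arg,
                           APTR buf, SIPTR len)
{
    dp->dp_Type = ACTION_READ;
    dp->dp_Arg1 = fh_arg;          /* the handler's file handle token */
    dp->dp_Arg2 = (SIPTR) buf;     /* destination buffer */
    dp->dp_Arg3 = len;             /* bytes requested */

    /* on reply, dp_Res1 holds the bytes read (or -1 on failure)
       and dp_Res2 the error code */
}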

I won't be implementing every packet listed in dos/dosextens.h; instead I'll find a balance somewhere between implementing every packet I can find documentation for and doing just enough to get FATFileSystem running. Its more important that the foundations are in place than that every obscure feature is implemented.

The target date I set myself is 30 April - three months from now. At my current work rate it feels conservative, but I do have a tendency to assume things are easier than they turn out to be, so hopefully its about right. Of course I'll keep blogging with my progress.

In other news, earlier today I picked up a 120GB Seagate Momentus hard drive and tonight got it running in my laptop. Thanks to my mad Linux skills, no reinstall required. I got a 2.5"-3.5" adapter, hooked the new drive to my Windows desktop machine and booted up a Linux Live CD. Dropped the laptop to single-user and remounted the drive read-only, and then, with the help of a crossover cable (since my only hub is 10 megabit), did:

# cat /dev/hda | ssh -e none -c blowfish root@192.168.0.2 "cat > /dev/hda"

After a couple of hours the entire drive image had copied, so a brief jaunt into Parted resulted in a much larger version of my standard filesystem. And I left a spare 10GB on the end in case I want to do some gaming and/or try some kind of "alternate" operating system ;)

wednesday, 31 january 2007

posted at 18:57
tags:

I've been interviewed by Paul J. Beel's "The AROS show". Just more of my usual ranting, I'm afraid, but it was fun to be asked - Thanks Paul!

Quick status update: I have the PuTTY core compiling, and nearly have the plink frontend ready. I still have to write the other AROS specifics, including the network layer. I'll write more soon - tonight I'm playing cards with some friends :)

monday, 29 january 2007

posted at 08:50
tags:

Sheesh, you step out for a couple of days and people start hassling you to write (thanks Christoph ;)

Anyway, I've sort of backburnered filesystems for a little while. The work is still interesting, but its hard to hit a moving target, which is what this is until all this DOS packets stuff is resolved. I'm still waiting on an email from Michal with his assessment of the situation, so I want to wait for that before planning my next move.

In the meantime, I've started looking into a port of PuTTY. So far I've got the core building, to the point where the link fails because none of the platform-specific functions are there. All thats left to do now is implement them.

I'm starting with plink, which is roughly equivalent to the UNIX ssh - does the protocol, but no real terminal smarts, leaving that to the calling console. Writing (or borrowing) a full-blown terminal emulation will be required, but for instant gratification I want to see a remote login first.

One thing that even the command-line tools need is a way to accept a password without displaying it. The normal AROS console.device doesn't allow this, so I've implemented an extension to the FSA_CONSOLE_MODE IO command that allows echoing to be switched on and off. My original plan was to extend the DOS SetMode() function to make the echo toggle (easily) available to user programs. It currently only recognises 0 and 1 as valid inputs, so by using a higher-numbered bit, we could just use that call. I asked for feedback on this idea, and Fabio Alemagna responded positively, but pointed out that a PuTTY port could possibly form the basis for a new console.device that has a full terminal emulation in it (ala xterm).

I think this is a great idea, as the standard console seems quite limited. In an interesting twist, if we had a really great console, then the need for PuTTY is removed somewhat, as something like OpenSSH can do the trick. On the other hand, the PuTTY code is much cleaner, so I'd be inclined to use it, but not port the putty tool itself (though a GUI session manager is still possible).

If we had a better console.device, then we'd also need an API to drive it - something like termios on UNIX, but with a more pleasant interface. So until I have an idea of what to do here, I'm not going to extend SetMode(), because I don't want to make a new interface that will become legacy if a new console interface appears. So for the moment, plink will do an IOFileSys call for FSA_CONSOLE_MODE directly. Its a little more unwieldy, but I think its the right first step.
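
For the record, the SetMode() extension would have looked something like this. The NOECHO bit is entirely made up - which is rather the point of not doing it yet:

#include <proto/dos.h>

#define DOSMODE_RAW    1          /* existing meaning: 1 = raw, 0 = cooked */
#define DOSMODE_NOECHO (1 << 8)   /* hypothetical new flag, not implemented */

/* sketch: put the console into raw mode with echo off, for password entry */
static void password_mode(void)
{
    SetMode(Input(), DOSMODE_RAW | DOSMODE_NOECHO);
}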

Oh, and by the way, a couple of people have already mentioned KingCON. Sounds great, but without source, I'm not interested (and if there is source, a quick Google doesn't find it). Remember that AROS is also an educational experience for me - porting software is far less satisfying than writing it myself.

thursday, 25 january 2007

posted at 21:29
tags:

Michal just sent me a read-only FAT32 driver, written by one Marek Szyprowski for MorphOS. Its DOS packet based. So now I have to choose - do I continue writing my almost-from-scratch filesystem for the learning experience? Do I take this one and convert it to the AROS filesystem API, and then add the write functions? Or do I implement the DOS packet interface (and then add the write functions)?

My questions about the packet interface stand, and I've just sent a long email to Michal seeking guidance. Meanwhile, I'll fiddle with something else. Right now I'm working out why resolve crashes under hosted. Hardly glorious work, but worth doing :)

thursday, 25 january 2007

posted at 09:12
tags:

Michal Schulz informed me of the existence of fdsk.device, which is basically a loopback device - a way to mount filesystem images.

The interface is a little unwieldy, but its quite usable. I've put vdisk.device into mothballs for a while, though I did have it very close to working. I might bring it back to life some time in the future, or at least extend fdsk.device with some of the ideas I have. Its not a high priority for me right now, as the point of this whole exercise was to build a filesystem.

I've started looking through the FreeBSD msdosfs code, to try to get a feel for it. Amazingly I'd forgotten just how bad POSIX code can be - the Amiga interfaces really are pleasant to read and use. Anyway, I've pretty much decided that trying to get the raw BSD files to compile and be usable is going to take at least as much effort as writing the filesystem from scratch and cut-and-pasting the useful bits, so I've settled on the latter. In theory it'll produce cleaner code, possibly at the expense of some stability. I'm not bothered - I learnt the hard way that readable code beats just about everything. You can fix bugs and stabilise things later, but if you can't read it you don't stand a chance.

All this means I've lost some time last night and this morning, so I've only stubbed the startup code for the handler, but I'm hoping in a few days I'll be able to read a floppy image.

wednesday, 24 january 2007

posted at 08:31
tags:

My brain isn't in the right place for SDL hacking right now. It seems pretty doable, but I'm bored. I keep thinking about filesystems, so thats where I'm going to focus my efforts for the moment.

AROS is going to need an implementation of FAT16/FAT32, if only to use USB keys when USB support appears. So my intent is to port msdosfs from FreeBSD.

Regardless of whether DOS packets or IO win the day as the filesystem interface of choice, something I am going to need is some sort of virtual disk device so I can use real filesystems under hosted. The idea is the same as what's used in virtualisation software everywhere - you have a big opaque chunk of disk that the virtual machine treats as a real piece of hardware.
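
The core of such a device is almost embarrassingly small - something like this, with plain C stdio standing in for however the device would really reach the host (a sketch, not the eventual code):

#include <stdio.h>

/* sketch: a block-device read becomes a seek+read on the backing image */
static int vdisk_read(FILE *img, long offset, void *buf, size_t length)
{
    if (fseek(img, offset, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, length, img) == length ? 0 : -1;
}

Writes are the same thing with fwrite(), and thats most of the interesting part done - the rest is the usual device plumbing.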

This one doesn't look too hard to implement. All devices are roughly the same, so most of what I learned from tap.device should apply here. The initial goal is to mount a CD-ROM image using cdrom.handler - only read magic required :)

tuesday, 23 january 2007

posted at 21:36
tags:

tap.device is pretty much finished. There's a few little things that need to be added, but my motivation is gone on it now - I'll just add bits as they're requested. Its in the nightly builds, so its pretty much just a matter of waiting for feedback now. So I'm now turning my attentions towards my next project.

I've been looking at the bounty list, and the two most easily attainable for me are probably "SDL graphics and input drivers" and "DOS packets" (the EFIKA port is also interesting, and quite lucrative, but I'd have to buy an EFIKA board first, and I'm cheap).

To do DOS packets, however, requires a pile of technical knowledge that doesn't seem to exist outside of books long out of print. I've asked a few questions about it on the AROS list, but haven't had any real reply yet. Even if I was convinced of the utility of this stuff (which I'm really not yet), I wouldn't be able to do it anyway.

So, as I wait, I'm looking into doing an SDL backend. Just playing at this stage, but I expect I'll have an idea in a day or two of whether or not I can do it, and if I can, how long it will take. All going well, I'll apply for the bounty and hopefully make a little pocket money :)

monday, 22 january 2007

posted at 10:27
tags:

Finished off the stats tracking code this morning, which gets the thing into a usable state, so its time for a release! Checking into AROS SVN as workbench-devs-tap-unix. It'll build by default, so it'll be in the next nightly build, after which time I expect to get a bit more feedback.

Made a release announcement over at AROS-Exec, and made a little screenshot too, running the MSS Snug web server. A screenshot of a network driver in action is fairly pointless, but it was asked for, so who am I to question it? :P

There's still a few things to do before I can leave this project and move on to other things. The biggest one is removing the requirement to run AROS as root. I'm currently digging around in the QEMU code, which appears to use TUN/TAP without running as root (or even setuid root). If thats no good, I've got a couple of ideas, so I expect something to happen soon.

saturday, 20 january 2007

posted at 13:55
tags:

Someone in #aros was saying that AROS has no goals and no directions, and that he couldn't support it based on that. I didn't comment at the time, but I've been thinking about it a bit and I've decided I agree with him, except that I think its a good thing.

Leaving games aside for a moment, how often do you actually have fun just using your computer? Windows and Unix systems are about work, not play (this includes Linux). Every new application is aiming big, trying to be "professional" and "enterprise-grade". And often they do a good job of it, but at the cost of having no soul.

The Amiga, on the other hand, is for play. Thats not to say its not possible to do serious work with it, but look at its history. It arrived at a time when computers were for home, not for work. They were for hobbyists, not professionals. You'd sit down and experiment, see what you could make the computer do, and with a bit of ingenuity, you could do quite a bit. You used it to create, rather than process.

The best example of this? Paint programs. Windows ships with MSPaint, a cute little freehand drawing tool. It doesn't do much, but pretty much everyone has used it at least once just to play - drawing a house, a boat, or just random graffiti.

As far as I know, neither Windows nor Unix has a "serious" freehand drawing program. If they do, they're not well known. That type of program is derided as a toy, while tools like Photoshop (GIMP) and Illustrator (Inkscape) control the field - both designed for serious processing work.

The Amiga, on the other hand, is well known for programs like Deluxe Paint and TVPaint, and more lately, Lunapaint. In terms of features and complexity, these applications are "serious" - they're not toys. But, they're aimed at artists - people producing digital art just for the sake of it.

The point I'm trying to make is that the thing that distinguishes the Amiga (and thus AROS) from other systems (save perhaps the Mac, though I'm not familiar enough with that system to comment) is that Amiga is for the artists, the musicians, the inventors, the creative folk, where the others are for the white-collar workers, the processors, the business types.

Now don't get me wrong, these are important jobs, and someone has to do them. I think that the creative types have lost out as computers have hit the mainstream and become merely tools to be used to get a job done rather than an end in themselves. I think I've felt this for a while, though I couldn't have articulated it until now.

And thats where AROS comes in. AROS can provide a way for computers to be fun and interesting and sexy again. So in a way no goals are required, because the very act of building the system is the point - if AROS was ever considered finished, then we've either lost our way or it isn't needed anymore.

None of this means AROS has to be a toy. If I had to set a goal, it would be to build an operating system that can take advantage of every piece of hardware in my computer and every last cycle of computing power to make me want to just play with my computer. I'd say its already well on its way.

In the last week I've enjoyed working on the AROS code more than any other code I've worked on in the last four years, since about just before I did the first jabberd stable release. It lets me stretch and try things out without worrying about doing it "wrong". It rewards me when I get it right but leads and teaches me when I get it wrong. The codebase, like the system it implements, is optimised for fun.

To anyone looking to make AROS into a "serious" operating system, while I wouldn't discourage you, I would say tread carefully. Don't remove the soul from the system in your efforts to make it like the "big boys". We need a fun and creative system like AROS. What we don't need is another Windows or Unix clone - they're quite good at doing that on their own.

saturday, 20 january 2007

posted at 10:48
tags:

Does this mean anything to you?

rob@plastic:~$ ping -c5 192.168.30.2
PING 192.168.30.2 (192.168.30.2) 56(84) bytes of data.
64 bytes from 192.168.30.2: icmp_seq=1 ttl=255 time=13.0 ms
64 bytes from 192.168.30.2: icmp_seq=2 ttl=255 time=12.9 ms
64 bytes from 192.168.30.2: icmp_seq=3 ttl=255 time=12.9 ms
64 bytes from 192.168.30.2: icmp_seq=4 ttl=255 time=12.8 ms
64 bytes from 192.168.30.2: icmp_seq=5 ttl=255 time=12.9 ms

--- 192.168.30.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 12.824/12.948/13.074/0.147 ms

:)

I was right, and the problem wasn't my fault. I got an email this morning from one Georg Steger (who has a finger in every pie) who read my last blog entry, and based on my rather vague description was heroic enough to track down a bug in UnixIO, and sent me a patch. Applied, and its a ping frenzy - I've sent ~10000, without issue. Thanks Georg, you're a legend.

While others were fixing bugs for me, I got to spend some time refactoring large chunks of code and adding various error checks and other stuff. Its now at the point where I think I've got a pretty solid and clean codebase to build all the other needed pieces on - stats tracking, broadcast/multicasts, and so forth. The hard stuff is done, it should be pretty plain sailing from here!

friday, 19 january 2007

posted at 09:19
tags:

I'm unbearably close to having this working. I found the problem that I described yesterday. I was using soft interrupts to have UnixIO signal event readiness. As far as I can tell, the interrupt handler is called more-or-less directly by whoever triggered the interrupt, meaning that my handler was running in the UnixIO select loop task. My handler calls back into UnixIO after the last write is done to disable write events. I'm not exactly sure, but I think I understand - UnixIO isn't reentrant, and so the lists got corrupted.

So that was a bit annoying, as it seemed so close, but I had to backpedal. The answer is to switch from soft interrupts to signals, but using signals requires that I have a task to signal, so I reworked the code to have TAP IO handled by a separate task. It sits and waits for UnixIO to poke it, and then reads or writes as appropriate. It took me ages to get it going, mostly because I spent two hours tracking down a stupid crasher that resulted from my own inability to read.
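
The task's main loop is simple enough in outline. The signal masks and helper functions here are illustrative, not the actual code:

#include <exec/types.h>
#include <proto/exec.h>

struct tap_unit;                                   /* opaque for this sketch */
extern void tap_handle_read(struct tap_unit *);    /* hypothetical helpers */
extern void tap_handle_write(struct tap_unit *);

/* sketch: the IO task sits in Wait() until UnixIO signals it */
static void tap_iotask(struct tap_unit *unit, ULONG read_mask,
                       ULONG write_mask, ULONG stop_mask)
{
    for (;;) {
        ULONG sigs = Wait(read_mask | write_mask | stop_mask);
        if (sigs & read_mask)  tap_handle_read(unit);   /* packets arrived */
        if (sigs & write_mask) tap_handle_write(unit);  /* ok to write again */
        if (sigs & stop_mask)  break;                   /* device closing */
    }
}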

This morning I finally got it working (for some value of working). It successfully responded to three packets, the initial ARP request and two ICMP ECHOes (ie pings), before hanging. The last thing I had a chance to check before getting to work was where the hang is. Its stuck somewhere in timer.device, called from AROSTCP, looping over a list that points back into itself. I'm not quite sure yet how to track this one down. I figure it'll be me not setting some value properly in one of the AROSTCP requests, or more likely, not locking something properly before changing it. Horrible horrible problems in both cases. I'm not sure how I'm going to find it, but I'm sure I'll think of something.

I just wish it would work. Its kinda demoralising - its been almost working for three days now, but just not quite. A few people have asked about it, so there's instant glory as soon as I'm done, which I want now :P

It certainly is a wonderful way of learning the system though. I know this for certain - UnixIO is a great concept, but the code is horribly crusty and disgusting. It needs quite a bit of work, which I might do sometime (ie added to TODO list).

thursday, 18 january 2007

posted at 07:46
tags:

Work continues apace. Yesterday tap.device received and decoded its first packets, and I happily watched as AROSTCP issued write commands in response. This morning on the bus, the first packet was responded to. Some output:

[tap] [0] got a packet
[tap] [0] packet dump (42 bytes):
0x000000  ff ff ff ff ff ff be 21  7e 9b ce 97 08 06 00 01  
0x000010  08 00 06 04 00 01 be 21  7e 9b ce 97 c0 a8 1e 01  
0x000020  00 00 00 00 00 00 c0 a8  1e 02                    
[tap] [0] source address: be:21:7e:9b:ce:97
[tap] [0] dest address: ff:ff:ff:ff:ff:ff
[tap] [0] packet type: 0x0806
[tap] [0] broadcast packet
[tap] [0] found a request that wants this packet, sending it
[tap] [0] packet copied successfully
[tap] in begin_io
[tap] CMD_READ
[tap] [0] queued read request
[tap] in begin_io
[tap] CMD_WRITE
[tap] [0] queued write request
[tap] [0] waiting for write events
[tap] [0] ready to write
[tap] [0] buffer has 28 bytes
[tap] [0] packet dump (42 bytes):
0x000000  be 21 7e 9b ce 97 2e 2e  22 89 d7 0a 08 06 00 01  
0x000010  08 00 06 04 00 02 2e 2e  22 89 d7 0a c0 a8 1e 02  
0x000020  be 21 7e 9b ce 97 c0 a8  1e 01                    
[tap] [0] source address: 2e:2e:22:89:d7:0a
[tap] [0] dest address: be:21:7e:9b:ce:97
[tap] [0] packet type: 0x0806
[tap] [0] wrote 42 bytes

Thats the debug output from the driver as it receives an ARP who-has broadcast from Linux, and sends a reply. tcpdump was kind enough to show it:

08:02:42.663596 arp who-has 192.168.30.2 tell 192.168.30.1
08:02:42.675941 arp reply 192.168.30.2 is-at 2e:2e:22:89:d7:0a (oui Unknown)

So we're extremely close. I've got a bug at the moment that is only really allowing one packet to be sent by AROS before it gets stuck somewhere deep in the kernel and consumes all of my CPU. Its to do with trying to disable write events when I've got no more packets to send - there's no point having UnixIO wake me up every second to tell me I can write if I have nothing to write. It seems to be causing some kind of interesting race condition inside the kernel's signal stuff. I'm not sure yet if its a bug or a limitation of UnixIO, but I'm sure its possible to fix, so my next step is to print unixio_class.c and study it for a while.

tuesday, 16 january 2007

posted at 07:44
tags:

Made some excellent progress yesterday. Turns out that only code built into the kernel can access the host OS, so I have to make use of a HIDD of some kind. But then I found the UnixIO HIDD. Essentially it exposes Unix file access to AROS applications. Since all I do is file operations on /dev/net/tun, it will work nicely.

Late last night I got tap.device as far as detecting that packets were being sent. I thought I'd add a simple packet dumper before bed, because its only an extra couple of lines of code - read then print. And then something truly horrible happened. Turns out that the UnixIO API doesn't have a method for reading data from a file. It has one for writing, but not for reading.

This is truly bizarre. I can only guess that it hasn't been required thus far. I went to bed rather annoyed, and this morning poked around for an alternative - I wondered if maybe the data was being sent along with the "ready to read" event. Sadly, no dice. So on the bus trip this morning I implemented a ReadFile method, which is working very nicely. Once again, I'm impressed at how intuitive the code is - in under an hour I'd learnt what I needed and got it working.

I'll write some tests for it today (mostly just extending test/unixio) and check it in tonight. I'm not sure what the etiquette is for changing something rather core to the whole system. I haven't broken anything, so I think I'll just check it in, tell #aros, and then deal with any fallout (though that seems unlikely). They gave me commit bits, I intend to use them :P

saturday, 13 january 2007

posted at 12:40
tags:

So I got a stub driver done, and this morning instructed AROSTCP to use it, but it failed in startup - couldn't find tap.hidd. Sure enough, the file wasn't in my install, so I tried to build it manually, but that failed too. Considering that I can't find anything anywhere (within AROS or via Google) that uses it, I suspect its just another victim of bitrot. So my job just got more exciting - now I have to resurrect that too.

I think I'm not going to bother though, and instead have tap.device talk to the Linux TUN/TAP layer directly. Its designed to mimic a network card, so why would you want to have anything other than a network driver talk to it? And if there's only going to be one thing talking to it, why not integrate them and get rid of a pile of complexity?

I can't help but wonder what it was for in the first place. Once I get some Subversion access I'll look through the history and see where it came from.

friday, 12 january 2007

posted at 20:38
tags:

Merry year, etc. I didn't write a great deal of code over the break, opting to trounce Dark Samus instead. But now I'm back at work, which means a few spare laptop hours each day, mostly while on the bus. I've already forgotten everything I was working on, so of course, its time for something new.

In days of old, I was an Amiga fanboy. I would've liked to be more, but I had nfi what I was doing, and couldn't afford the necessary s3kre7 t3ch to do the really awesome stuff. I did spend many long hours in AMOS Pro, but I think the real heroes would tell you that doesn't count.

Amiga is mostly dead and gone now, but I still have fond memories. At various times in the last few years I've stumbled onto AROS. A few clowns thought it'd be great to reimplement AmigaOS from scratch, and who am I to argue? I remember when I first saw it, it was complete pants, but I tried it again in 2005 and found it to be quite impressive. I started playing with the code then, even porting JJFFE (sadly, that code is lost). As usual, I got sidetracked later (I think on some interesting MUD code), and forgot about it.

Anyway, I rediscovered it again a couple of days ago and grabbed the code. It didn't compile, which turned out to be the build system wrongly assuming that a particular header file was always in the same place. On Debian, its not. But in the course of finding and fixing the problem, I got to look at the code again, and was reminded of just how interesting the system is. So AROS has been selected as my next plaything.

I've joined the lists and posted my build patch, and last night dropped into #aros. Talking about possible projects, someone suggested a decent telnet/SSH client would be useful, immediately making me think to port PuTTY. First of course, I need network.

As far as I can tell, AROS has working network/TCP support, but only if running in native mode (ie running direct on the hardware). I'm not particularly interested in that mode of operation, preferring the Linux hosted mode - saves dicking around with partitions and rebooting and whatever. Unfortunately there doesn't seem to be a network driver for hosted mode, so I decided my first project was to make one.

I know from past fiddling that the TUN/TAP drivers are the way to realise this. Basically it creates a network interface that is attached to a process rather than a physical bit of hardware. With the right setup it sends and receives raw ethernet packets. So my thought was to learn it, then work out how to integrate it into AROS.

I wrote taptest.c to get a feel for how things should work. Armed with that, I set about building an AROS hardware driver.
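
taptest.c is nothing special - it boils down to the standard Linux TUN/TAP dance. This isn't the actual file, but the shape is the same: open the clone device, ask for a tap interface, and then read() and write() raw ethernet frames on the descriptor you get back:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

/* open a tap interface; name is e.g. "tap0", or "" to let the kernel pick */
int tap_open(char *name)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;    /* ethernet frames, no header */
    strncpy(ifr.ifr_name, name, IFNAMSIZ);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }

    strcpy(name, ifr.ifr_name);             /* the name the kernel chose */
    return fd;                              /* read/write now move frames */
}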

Since AROS is trying to be portable, its device drivers don't talk directly to hardware. Instead, there exist Hardware-Independent Device Drivers (HIDDs) that essentially abstract the underlying physical hardware. Then, the device drivers (eg graphics.device) talk to the HIDD to do the work. It makes sense, but the importance was completely lost on me as I charged ahead, copying one of the network card drivers and gutting it to do my bidding.

After a little while I started thinking about how to make an AROS binary use facilities provided by the host when it hasn't the faintest clue that there exists something called a "host system". Then it dawned on me that I'd need a HIDD that talks to the Linux TUN/TAP driver, and then a network driver that talks to it and implements the SANA-II interface.

So now I had two things to write. Digging around, turns out there's already a tap.hidd, but a corresponding tap.device is nowhere to be seen. I can't imagine what the HIDD could possibly be for on its own, and nothing in the source tree seems to reference it, but I'm not above stealing it and using it.

So there, I've said basically nothing, but at least you now know. The one thing I am finding is that this project is fun to hack on, something I've really been missing in the last little while. And there's heaps to be done still, so I shouldn't be lost for ideas. Hopefully I can hang around here a little longer than most other things.

friday, 20 october 2006

posted at 21:58

Just popping up for air quickly to show you what I've been working on: an isometric tile engine in a web browser.

My intent is to use this as my interface for goodnight, the LoM/DDR clone I mentioned previously. Work is very slow, but I'm starting to actually get the interfaces fleshed out properly.

And by the way, Inkscape is incredible. Compare the original mountain terrain graphic from DDR with the fancy SVG-based version made with Inkscape's "Trace Bitmap" feature. I converted all 17 DDR terrain graphics to SVG and thence to PNG during my hour-long morning bus trip, and they look fantastic.

friday, 15 september 2006

posted at 07:23
tags:

So a couple of weeks ago I got the chance to chat to Damian about IO::Prompt and my completion patch. While he rejected the patch because the interface was sucky (and I agree), he accepted my offer to take on maintenance duties for the module. Thats not it for completion though; we're currently designing a much better completion and history interface, which I'll write more about some other time. My first trick will be to get a test suite up and running.

IO::Prompt doesn't currently have a test suite, and I'm not confident that I'll be able to make any significant changes without breaking whats there, so the current functionality has to be recorded. The difficult thing about it is that we're testing something terminal-based, so we have to pretend to type something, and then watch not only what the API returns, but also what appears on the screen.

This turns out to be quite complicated. The module opens /dev/tty directly, both for reading and writing, so we need to intercept the calls to open (via CORE::GLOBAL::open) and return some filehandles we can manipulate directly. My first cut used basic scalar handles, but then I ran into further trouble when I found that the module uses -t to see if its talking to a terminal. Obviously my scalar handles are not terminals, so I needed a way to convince -t otherwise.

After a deep tour into the guts of Perl itself (a fascinating and scary place) I determined that there's really no pleasant way of overriding -t, though there is a patch under consideration for 5.10, and I did figure out a really evil way that might do it by twisting the optree in ways that I wouldn't dare give to the world. So the only other option is to somehow produce filehandles that are in fact terminals.

IO::Pty provides the answer, by allowing me to get pseudo-terminals from the operating system. I kinda didn't want to go there, because it ties the implementation to systems that have terminals, which doesn't include Windows, but I've since decided that it'll be fine for now since the current code hits /dev/tty directly, and that doesn't exist on Windows either.
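
Putting the two tricks together looks something like this. This is a minimal sketch of the idea, not Test::MockTerm itself - the real thing has to cope with bareword handles, two-arg open and plenty of other details:

use IO::Pty;

my $pty   = IO::Pty->new;   # master: the side the test script drives
my $slave = $pty->slave;    # slave: a real terminal, so -t returns true on it

BEGIN {
    *CORE::GLOBAL::open = sub (*;$@) {
        # divert /dev/tty to our pty slave, pass everything else through
        if (@_ == 3 && defined $_[2] && $_[2] eq '/dev/tty') {
            $_[0] = $slave;
            return 1;
        }
        return @_ == 3 ? CORE::open($_[0], $_[1], $_[2])
             : @_ == 2 ? CORE::open($_[0], $_[1])
             :           CORE::open($_[0]);
    };
}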

Time passes. I play with this module, figure out the difference between master and slave and make a note of it because its stupid and I can never remember, and finally produce Test::MockTerm. Its not CPAN-ready yet, its currently a build helper for IO::Prompt, but I think it may have a life of its own someday. Using it, I write some basic tests for IO::Prompt, and run it .. and it hangs, waiting to read from standard input.

After further perusal of the code, it seems that IO::Prompt only reads directly from /dev/tty when the -tty or -argv options are specified. Otherwise, it arranges to read from standard input. However, it does this not by simply using the STDIN handle, but by using the first file in ARGV, and if that doesn't work, trying to open - (using the single-arg scalar-and-filehandle form of open). I think (more testing required) Damian did it this way because STDIN may have been redirected away from wherever ARGV and - pointed initially.

This presents an interesting problem. I now need to arrange for opening - to actually cause my pseudo-terminal input handle to be used instead. But, I've already overridden open, and you can't have multiple overrides, so I need some kind of multiplexing/dispatch thing to figure out which open replacement to use.

Except I don't. I've just now had a good idea. What if you specified /dev/tty explicitly on the command line as the input source? Wouldn't we want that intercepted also? And isn't that in the scope of what Test::MockTerm should do? The answer is yes. I'm going to modify my code to look for /dev/tty in the one-arg form of open, as well as to look for - and use the same handles. That should take care of it. Epiphany!

So thats where I'm at for now. This has been an incredibly challenging project so far, and I haven't actually written any real tests yet! I intend for this code to be released in IO::Prompt 0.99.5 or 0.99.6, depending on how long it takes.

friday, 1 september 2006

posted at 15:53
tags:

Damian dropped in on Monday to word us up on the finer points of VIM insanity. So now I have an editor that can do anything, one way or another. And if it can't, Perl can, and the two of them work well enough to get things done. As a test, I implemented an insert-mode LDAP lookup which uses Perl to do the LDAP stuff, leaving the editor bits to VIMs internal scripting language.

The Perl interface for talking back into VIM itself (eg reading and setting variables) is pretty horrendous, but I've already written a proof-of-concept for a module that will make life far more Perly, eg instead of this:

VIM::Msg(VIM::Eval("a:arg"));

we can do this:

print $a{arg};

I know which I prefer :)

When and if I get this module into some kind of usable state, it'll go up to CPAN as VIM::Sane.
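
The trick behind %a will be tie. Something like this - a proof-of-concept shape only, it only makes sense inside VIM's embedded Perl, and the real module needs to handle errors and the list/dictionary cases:

package VIM::Sane::Args;

# sketch: a tied hash whose lookups call back into VIM's evaluator
sub TIEHASH { my $class = shift; return bless {}, $class }
sub FETCH {
    my ($self, $key) = @_;
    my ($success, $value) = VIM::Eval("a:$key");   # returns (success, value)
    return $success ? $value : undef;
}

package main;
tie our %a, 'VIM::Sane::Args';    # and now print $a{arg} works as above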

I also deleted my entire .vimrc and .vim/ and started again. Its now clean and nicely commented, and makes me happy. I finally know what I'm doing :)

I also got a chance on Monday to talk Perl with Damian at lunch, and got an answer to my Class::Std problems as well as lots of IO::Prompt related stuff. But I'm out of time now, will talk about that more later.

sunday, 27 august 2006

posted at 15:21
tags:

The Midnight clone I keep talking about (and I am getting around to writing about, really) is actually a port to Perl of a Java application. My initial goal was to get the thing working, then make it more Perl-like.

One interesting pattern that is used throughout the original is special classes that act only as enumerated types - they simply create a bunch of instances of themselves with specific names that act as constants. These classes are known in Java as "typesafe enumerations".

So I needed something to replace them with. My first attempt was to simply create empty packages with a pile of use constant declarations in them. This worked well enough, but the original enumeration classes had additional features - they needed to stringify properly (via a toString method), they needed to compare equal only to themselves (something use constant can't do, since its constants resolve to just plain scalars) and the constants can be objects with methods declared on them.
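
To make those requirements concrete, here's a hand-rolled version of the pattern for a single enumeration. Class::Constant exists so you don't have to write this boilerplate every time; this isn't its implementation, just the idea:

package Direction;

use Scalar::Util qw(refaddr);
use overload
    '""' => sub { $_[0]->{name} },                      # stringify nicely
    '==' => sub { refaddr($_[0]) == refaddr($_[1]) },   # identity, not value
    '!=' => sub { refaddr($_[0]) != refaddr($_[1]) };

sub _new { my ($class, $name) = @_; return bless { name => $name }, $class }

use constant {
    NORTH => __PACKAGE__->_new("north"),
    SOUTH => __PACKAGE__->_new("south"),
};

package main;

my $heading = Direction::NORTH;
print "heading $heading\n";                           # "heading north"
print "due north!\n" if $heading == Direction::NORTH;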

Now I don't know if all these extras stop the classes from being "correct" by some theoretical definition of what a typesafe enumeration should be (if such a definition exists), but there's no denying that this stuff is useful, and Perl has always been about practicality over purity. Plus, I've started to think of places where I could use something similar in other code I've written - pretty much anywhere that a variable could have a limited number of possible values, like state variables.

So I implemented something generic and useful, that I'm now quite proud of: Class::Constant. It makes it dead simple to declare a basic C-style enumeration, but has enough wackiness to do some really crazy things. I'll talk a bit more about some real examples once I start writing about my project, real soon now. Until then, this is just highly ineffective advertising :)

sunday, 20 august 2006

posted at 14:27
tags:

I gave Daniel a demo of the Midnight clone the other day, and one of the first things he did was to try and do a tab complete in the little command-line interface. It was immediately obvious that any modern command-based interface needs completion and history, so I set out to find a way to provide it.

A quick CPAN search didn't really uncover anything. A couple of the Term::Readline variants claim to have support, but the interface seemed rather clunky (reasonable, since it comes from C). I use and love IO::Prompt, largely because of its trivial interface. The answer became clear - IO::Prompt requires tab completion.

The whole thing took three evenings to implement. It was pretty straightforward. I started by adding support for a -complete option, which took a list of possible values. After that it was just a case of hooking up the tab key, comparing the current input to the values in the list, and replacing part or all of it with the matched value. This worked wonderfully well, and did great things for my confidence - I'm always a little unsure if I'm doing something the correct way when I go to work on someone else's code - particularly when its written by a known hero like Damian Conway :P

Adding the characteristic "beep" when you only get a partial match was next. Trivial, of course - just emit ASCII code 0x7 at the proper time; the terminal takes care of the rest. A -bell option went in alongside - I'm a firm believer in being as flexible as possible.

The next hallmark of tab completion is displaying possible values when there is more than one match for the current input. Since I already knew about the available matches, its no effort to print them out, but it wants to look nice too. A little column calculator went in to make things pretty. I also added a "show all <N> matches?" prompt when its likely that showing them all will scroll your terminal. Obviously, getting a prompt is no problem (this is a prompting module, after all :P ), but I also found that prompt() is not reentrant - it doesn't restore the previous terminal settings when it exits, opting instead to return the terminal to "cooked" mode. I haven't looked in any depth, so I don't know if IO::Prompt or Term::ReadKey is at fault here. Either way, it caused tab and other keys to not be detected correctly after the "show all" prompt. The workaround was to simply chuck the terminal back into raw mode, and it coped nicely.

So that just about finished it, but then while writing examples I started to realise that the whole thing was actually pretty useless. The reason: the vast majority of command line inputs are actually a set of individual and sometimes unrelated fragments. Think about completion for files. If I had to provide an array of every possible file that could be chosen in a given situation, I'd have to provide a list of every file on the system, each with full path. Obviously, this is ridiculous. What's wanted is a way to complete only portions of the input line, with the possible values selected by looking at the surrounding context. More than a humble array can achieve. What was needed was a callback.

I made it so that -complete could handle a coderef as well as an array - the simple array code might still be usable in places, and its certainly easier to understand. I figured it would be enough to simply pass the current input to the callback, and have it look at the contents and return a list of possible values based on it.

This worked, but had problems. The callback code was pretty complex, and when there were multiple possible values, displaying them was awful, because my code only knew how to complete entire input lines, not fragments of lines. So the callback would have to return the full input line with each possible outcome. Perhaps an example: we're writing a program that has to load files, using a "load /path/to/file" command. We want to do shell-style completion for the file/path portion of the input. To work correctly, the callback has to look for the "load " at the start, then split up the file path, look inside the directory, and return any files there. But, it has to return the full input line, so it returns something like:

  • load /path/to/file/foo
  • load /path/to/file/bar
  • load /path/to/file/baz

As well as having piles of redundant information, if my code were to display them, it would show the entire lines, when it should have just shown foo, bar and baz (just like your shell). Obviously, IO::Prompt needed to be smarter - it had to understand fragments and do the splitting itself.

This actually took me about two days of thought to figure out - the bus to work is a great place for pondering. The solution was to have prompt() split the input, pass all of the fragments to the callback, and only do completion on the final item in the split. So in the above example, the callback would return qw(foo bar baz), and thats that. A -split option was added that takes a regex to be passed to Perl's split builtin.

Implementing this took quite a bit of internal gymnastics because I was having to essentially write the same code for two cases (full line and split fragments). Generalising aspects of the code (mostly the list matching and the input replacement code) was proving quite finicky, until I made the leap of logic that told me I'd made the correct choice with the interface. If we're trying to complete a full line, that line becomes a split with no delimiting character, and one fragment.

Five minutes later, it was done. And the callback is trivial. The most complex callback you'll ever likely write for this sort of thing is one to do file paths, because of the special cases - there's files and subdirs. I included that in the examples. It weighs in at just eight lines.
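
From memory, it comes out something like this. This is a reconstruction rather than the actual example, and it assumes the interface as described above (the callback receives the split fragments and completes the last one):

use IO::Prompt;

my $line = prompt "load> ",
    -split    => qr/\s+/,
    -complete => sub {
        my $frag = $_[-1];                     # the fragment being completed
        my ($dir, $leaf) = $frag =~ m{^(.*/)?([^/]*)$};
        $dir = "./" unless defined $dir;
        opendir my $dh, $dir or return;
        return grep { /^\Q$leaf\E/ }           # only names matching so far
               map  { -d "$dir$_" ? "$_/" : $_ }
               grep { !/^\.\.?$/ } readdir $dh;
    };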

All in all, a roaring success. Damian now has the patch. Haven't heard back yet, but I'll see him next week so I'll hassle him then if necessary.

saturday, 19 august 2006

posted at 14:18

Back in November 2005, I was learning a lot about Javascript. I'd used closures (essentially anonymous functions) in Perl for years without thinking about it, but the way Javascript scopes variables makes them not work quite the same, so I was suddenly forced to think about closures in more detail.

I had this on the old website and would probably have been content to let it die there, but for two things Daniel did this week:

  • He pointed the Rails IRC channel at it, and I just can't ignore that kind of exposure.
  • He dared me to make function currying work in C, and the two are related enough that they should be nearby.

So consider this use of closures in Perl:

@c = ();
for($i = 0; $i < 10; $i++) {
    my $x = $i;
    push @c, sub { print "x is $x\n" };
}
$_->() for @c;

Or the equivalent in Javascript:

c = [];
for(i = 0; i < 10; i++) {
    // var is function-scoped, not block-scoped, so each closure needs
    // its own wrapper function to capture its own x
    (function (x) {
        c.push(function () { print("x is " + x) });
    })(i);
}
for(f in c)
    c[f]();

The C version would look like this:

#include <stdio.h>
#include "closure.h"

int main(int argc, char **argv) {
    closure c[10];
    int i;

    for(i = 0; i < 10; i++) {
        CLOSURE_INIT(c[i]);
    }

    for(i = 0; i < 10; i++) {
        CLOSURE_START(c[i]);
            int x = i;
            printf("x is %d\n", x);
        CLOSURE_END(c[i]);
    }

    for(i = 0; i < 10; i++) {
        CLOSURE_CALL(c[i]);
    }

    return 0;
}

Compile and run:

% gcc -Wall -ggdb -o closure closure.c
% ./closure 
x is 0
x is 1
x is 2
x is 3
x is 4
x is 5
x is 6
x is 7
x is 8
x is 9

closure.h provides the magic:

#ifndef __CLOSURE_H
#define __CLOSURE_H 1

#include <ucontext.h>

typedef struct closure {
    ucontext_t  enter;
    ucontext_t  exit;
    int         inside;
} closure;

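/* a closure starts out "not executing" */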
#define CLOSURE_INIT(c) \
    c.inside = 0;

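/* save a re-entry point; on this first pass inside is 0, so the body
   is skipped - it only runs when CLOSURE_CALL jumps back here */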
#define CLOSURE_START(c) \
    getcontext(&c.enter); \
    if(c.inside) { \
        ucontext_t __closure_end;

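/* the body is done: jump back to whoever invoked CLOSURE_CALL */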
#define CLOSURE_END(c) \
        swapcontext(&__closure_end, &c.exit); \
    }

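/* save the caller's context in c.exit and resume at the saved entry
   point; we return here once the body swaps back via c.exit */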
#define CLOSURE_CALL(c) \
    c.inside = 1; \
    swapcontext(&c.exit, &c.enter); \
    c.inside = 0;

#endif

I hope to be able to have an implementation of currying/partial application soon - got some great ideas already :)

friday, 18 august 2006

posted at 16:12

Lately the code I've been writing for work and for play has been doing more and more interesting things by means of subtle and clever gymnastics. I've felt the need to write about them. This is where that writing takes place.

My approach to technical problems, particularly when software is involved, is to first think "it can't be that hard." I won't be told that I can't do something - a computer can do just about anything, given enough time (both programmer and run time). Whether you should do something is another story, but it never hurts to speculate.

Conversely, I like to be challenged. I like it when someone says "you know, you should make X," for insane and/or impossible values of X. Thats also an open offer for you to throw up a challenge, by the way :) The other thing is to take everything here with a large dose of tongue-in-cheek. I have an opinion about everything, especially the things I don't know about. I'm probably wrong, and I'm content :)

What wackiness do you do with the computer? Send me some email: rob@cataclysm.cx

friday, 18 august 2006

posted at 16:12
tags:

Welcome to WordPress. This is your first post. Edit or delete it, then start blogging!

I have a feeling they meant for me to delete that text, but it adds a certain stale canned goodness that merely using an off-the-shelf blogging package with one of the themes it ships with just can't provide.

Cough.

Anyway, this is the fast track for me to be able to write a little about some of the insanity that I've been working on. I'll be spending the next few days writing up some recent journeys through the depths of IO::Prompt, overload and Class::Std, talking about the Lords of Midnight clone I'm working on, and bringing "Closures in C" back from the dead.