Tuesday, December 17, 2013

The outsourcing question

I run a web development business, and am constantly weighing which of my supporting services should be contracted out and which I should handle myself. And, for what I don't do myself, whom I can trust to deliver that service reliably to my clients. And what to do when that service fails.

This is not an academic debate this week for me.

On Sunday, my server-hardware supplier failed me miserably. On Friday, I had notified them of errors showing up in my log related to one of my disks (the one that held the database data and backup files). They diagnosed it as a controller issue and scheduled a replacement for early Sunday morning. So far so good. It took longer than they had expected, but the server came back and seemed to check out on first report, so I thought we were done. It was Sunday morning and I wasn't going to dig too deep into what I thought was a responsible service provider's area of responsibility.

On Sunday evening, Karin (my business associate at Blackfly) called me at home (which she normally never does) to alert me that the server was failing. By that point, the disk was unreadable, so we scheduled a disk replacement and I resigned myself to using my offsite backups, which were now a day older than normal because the hardware replacement had run past the hour when the backup normally runs (why didn't I run it manually after the hardware "upgrade"? Good question).

That server has been much too successful of late, and loading all the data from my offsite server was much slower than I'd anticipated (about 2 hours), and then running all the database restores took a while. To make it worse, I decided it was a good opportunity to update my MariaDB (MySQL) version from 5.2 to 5.5. That added unexpected stress and complications (beware the character-set configuration changes!), which I can mostly only blame myself for, but at least I suffered for it correspondingly with lack of sleep.

But then on Monday, after sweeping up a bit, I discovered that not only had Sunday morning's hardware swap failed to address the problem (and made things harder by postponing what could have been a simple backup to the other disk), they had actually swapped good hardware for older hardware of lesser capacity - in other words, the response to the problem had made it considerably worse. I had a few words with them; I'll give them an opportunity to come up with something before I shame them publicly.

Now it's Tuesday morning and the one other major piece of infrastructure that I outsource (DNS/registration, to hover.com) is down, and has been for the last hour.

In cases like this, my instinct is to circle the wagons and start hosting out of my basement (just kidding!) and run my own DNS service (also kidding, though less so). On the other hand, the advantage of not being responsible is that it gives me time to write on my blog while they're messed up.

Conclusion: there are no easy answers to the outsourcing question. By nature, I take my responsibilities a little too much to heart, and have a corresponding outlook on what healthy 'growth' looks like. Finding a reliable partner is tough. It's what I try to be.

Update: here's an exchange with my server host after they asked when they could schedule time to put the right CPU back in, and whether I wanted to keep the same ticket or open a new one:

Me:

Thanks for this. I don't care if it's this ticket or another one. Having a senior technician to help sounds good, and I wonder if you could also tell me what you plan to do - are you going to put back my original chassis + CPU, or try to swap my old CPUs into this chassis? Or are you just going to see what's available at the time?

The cavalier swapping of mislabelled parts after a misdiagnosis of the original problem points to more than a one-off glitch, particularly in light of previous errors I've had with this server - it sounds to me like you've got a bigger problem, and having a few extra hands around doesn't convince me that you've addressed it.

What I have experienced is that you are claiming and charging for a premium service and delivering it like a bargain basement shop.

Them:

We will check available options prior to starting work during the maintenance window. 
 
We are currently thinking we would like to avoid the old chassis in case there are any SCSI issues and move the disks to another, tested chassis. As an option, we could add a CPU to the current server. 
 
If you have any preference on these options, we will make it the priority. 
 
I apologize again for the mistakes made, and the resulting downtime you have experienced. 


Is it just me, or did they just confirm what I was afraid of?

Saturday, October 19, 2013

Me and Varnish win against a DDoS attack.

This past month one of my servers experienced her first DDoS - a distributed denial of service attack. A denial of service attack (or DoS) is just an attempt to shut down an internet-based service by overwhelming it with requests. A simple DoS attack is usually relatively easy to deal with using the standard Linux firewall, iptables. iptables works by filtering traffic based on the incoming request's source (i.e., the IP of the attacking machine). The attacking machine's IP can be added to your custom iptables 'blacklist' to block all traffic from it, and it's quite scalable, so the only thing that can be overwhelmed is your actual internet connection, which is hard to do.
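
For example, blocking a single attacker is a one-liner (a sketch, with a placeholder address from the documentation range):

# drop all incoming traffic from one attacking machine
iptables -A INPUT -s 192.0.2.99 -j DROP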

The reason a distributed DoS is harder is that the attack comes from multiple machines. I first noticed an increase in my traffic about a day after it had started - it wasn't slowing down my machine, but it did show up as a spike in traffic. I quickly saw that a big chunk of the traffic was all of the same form - a POST to a domain that wasn't actually in use except as a redirect. There were several requests per second, and each attacking machine would make the same request about 8 times. So it was coming from a lot of different machines, making it infeasible to keep manually adding these IPs to my blacklist.

It certainly could have been a lot worse. Because it was attacking a domain that was being redirected, each request was using up an Apache process, but no PHP, so it was getting handled very easily without making a noticeable dent in regular services. But it was worrisome, in case the traffic picked up. It was also a curious attack - why attack an old domain that wasn't even in use? My best guess is that it was either a mistake, or a way of keeping someone's botnet busy. I have heard that there are a number of these networks of "zombie" machines, presumably a kind of mercenary force for hire, and maybe when there are no contracts, they get sent out on scurrilous missions to keep them busy.

In any case, I also thought a bit about why Varnish wasn't being useful here. Varnish is my reverse-proxy protective bubble for my servers (yes, kind of like how a layer of varnish protects your furniture). The requests weren't getting cached by Varnish because, in general, it's not possible to responsibly cache POST requests (which is presumably why a DDoS would favour this kind of traffic). To see why, just imagine a login request, which is a POST - each request will have a unique user/pass, and the results will need to be handled by the underlying CMS (Drupal in my case).

But, in this case, I wasn't getting any valid POST requests to that domain anyway, so that made it relatively easy to add the following stanza to my varnish configuration:

 # Attempt a cache lookup even for POSTs to this redirect-only domain,
 # so the attack traffic bounces off the cache instead of reaching Apache.
 if (req.http.host ~ "example.com" && req.request == "POST") {
   return (lookup);
 }

And indeed, now all the traffic is bouncing off my Varnish and I'm not worrying. If it had been a domain that was actively in use, I could have added an extra path condition (see the sketch below), since no one should be POSTing to the front page of most of my domains anyway, but it would have started getting trickier. Which is why you won't find Varnish too helpful against DDoS POST attacks in general. As usual, the details matter, and in this case, since I was being attacked by a collection of mindless machines, the good guys won.
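
For illustration, here's what that extra path condition might have looked like - a hypothetical sketch for a live domain whose front page should never receive a legitimate POST:

 # Absorb only POSTs to the front page; real POST endpoints still pass through.
 if (req.http.host ~ "example.com" && req.request == "POST" && req.url == "/") {
   return (lookup);
 }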

Wednesday, August 07, 2013

Confused by online payment processing? You're not alone.

In the old days, during "polite" conversation, it was considered rude to talk about sex, politics, religion and money. You might think we're done with taboos; we're not (and I'll leave Steven Pinker to make the general argument about that, as he does so well in The Better Angels of Our Nature).

The taboo I'm wrestling with is about money - not how much you make, but online payment processing: how it works, and what it costs. In this case, I think the taboo exists mainly because of the stakes at hand (i.e. lots of money) and the fact that most of those involved don't get much out of explaining how it really works - nuanced communication is overwhelmed by sales-driven messaging, and the nuanced stuff is either a proprietary secret or likely to get slapped down by the sales department.

In other words, if you want to really understand online payment processing because you need to decide between one system and another, you're in for a rough ride.

Several years ago I wrote a couple of payment processors for CiviCRM, and lately I've been working on a new version of one of them. At the same time, two clients have been trying to understand their existing payment processor services in order to integrate those processes into their CiviCRM installations. So this is my "Payment Processor primer for CiviCRM administrators" blog post.

What You Need To Know


Here's a simplified but useful model of what's happening. A typical online payment has three phases, and each phase may be the responsibility of a different (or the same) service provider. I'm talking about a typical real-time transaction via credit card - other flavours will introduce new complications.

Phase 1: The Form


The web form is the public interface where the visitor inputs things like a name and credit card number. Sometimes it's a two-part form. Depending on the transaction, you'll want this form customized so that your visitor doesn't get confused and leave. The "depending" bit is really about your visitor's relationship to you. If they already know and love you, it probably matters less. If they're new and not yet convinced they want to give you money, it's big.

CiviCRM can provide the form, but it also supports payment processors that insist on doing the form themselves (e.g. PayPal Standard). The big advantage of CiviCRM doing the form is customization and not-alarming-or-confusing-the-visitor (e.g. the PayPal form allows credit cards, but many people get to that form and bail because they think they need to sign up for a PayPal account). The big disadvantage is that you need to worry about your server and something called PCI compliance, which is another topic.

Phase 2: The Transaction Processing or The Payment Gateway


This phase starts after the visitor clicks the submit or confirm button, and may happen entirely in the background or may involve the visitor in supplementary forms and clicking. This phase is the responsibility of a "payment gateway", a service that you have to buy unless you're a large corporation that builds its own. The payment gateway has business contracts and software relationships with the Phase 3 players. The key services it provides are to abstract away the individual details of the different card company interfaces and to take responsibility for financial compliance (e.g. it needs to keep those credit card numbers very, very safe ...).

CiviCRM does not try to do this, but provides interfaces to many payment gateways and, in theory, allows you to write an interface to any payment gateway that publishes some kind of "API" or programmer interface. It can be confusing because many payment gateways also try to be in the business of providing Phase 1 services (e.g. 'hosted forms'), and it may not be obvious whether there is such a thing as an API - sometimes they call it something else.
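
To make "write an interface" concrete, here's a skeletal, hypothetical sketch of the shape such a processor takes - the class name is invented, and the constructor and method signatures follow the conventions I've seen in CiviCRM 4.x core processors (check the real ones in CRM/Core/Payment/ before copying anything):

class CRM_Core_Payment_MyGateway extends CRM_Core_Payment {

  function __construct($mode, &$paymentProcessor) {
    $this->_mode = $mode;
    $this->_paymentProcessor = $paymentProcessor;
  }

  // Phase 2 happens here: hand the submitted values to the gateway's
  // published API and record what CiviCRM needs (e.g. a transaction id).
  function doDirectPayment(&$params) {
    // ... call the gateway's documented API with the details in $params,
    // then save the result, e.g. $params['trxn_id'] = ... ;
    return $params;
  }

  // Report missing settings (keys, passwords) to the administrator;
  // return NULL when the configuration looks complete.
  function checkConfig() {
    return NULL;
  }
}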

An excellent, more technical description is here on wikipedia:
http://en.wikipedia.org/wiki/Payment_gateway

Phase 3: Transaction Completion


This is the murkiest phase, where the payment gateway, the institution behind the card (issuing bank or card association) and the "merchant account" all exchange information, and some kind of electronic trail gets laid that eventually results in money being transferred from the card holder to the "merchant account". The "merchant account" is a special kind of bank account that is enabled for credit card payments. What makes it special is that the credit card companies have a noose around its neck - i.e. they take a chunk of money before it gets to the account, and reserve the right to take the money back if there's a problem. The "merchant account" might be directly owned by you, the site owner, or it might be owned by someone else who then passes the money on to you.

It's not unreasonable to confuse this phase with Phase 2, since they happen together and Phase 2 without Phase 3 is kind of pointless, but it's important to separate them in terms of costs and responsibilities. Phase 2 is really a technical and business-relationship service that handles the details of the transaction (kind of like an electronic teller, or maybe the hired gun in a drug deal). Phase 3 is where the money ends up and is accounted for (the backroom settling of accounts ...).

It's also important to separate them because you can have Phase 3 activity without Phase 2. For example: a 'recurring transaction', where a donor says they'll give you money every month. Once the initial deal is sealed, the subsequent transactions don't need to go through Phase 1 or 2 (but might anyway).

What You Actually Get and How Much it Costs


So the challenge of comparing various payment processor "solutions" is to sort out the apples and oranges. With CiviCRM, you have to buy services from at least one company in order to accept online credit card payments, but any company you find may offer a mix of services covering these three phases. PayPal Standard will only give you a soup-to-nuts, end-to-end solution. A merchant account will only get you the last phase, and you'll still need a payment gateway service. If you don't have an SSL certificate for your site, you will need Phase 1 services, but if you do have SSL, you probably don't want them. Most payment gateway services will offer to bundle in a merchant account and/or a hosted payment form service. And each of these offerings will claim to be better than the competition for reasons x, y and z. And each one will use a different vocabulary to describe what they are giving you.

So, here's what you should expect and look at:

1. Phase 1 services. You probably shouldn't pay an ongoing fee for this unless they are providing really good customization - it should be a one-time fee for the customization of the form, plus a fee for changes. Getting CiviCRM to host this form is usually a better idea unless you're on cheap hosting and/or can't get an SSL certificate.

2. Phase 2 services. These will typically cost a monthly or yearly fixed fee, sometimes a setup fee, and almost always a per-transaction fee. There's no particular reason there should be a % fee, since the cost of providing these services is basically per-transaction plus setup and account maintenance, unless the company is trying to do some kind of gambling, which is stupid.

3. Phase 3 services. The merchant account part of the service is really about paying off the card mafioso, plus an extra handling charge to the bank. Each major card has its own rates that it charges the bank, but MasterCard and Visa are similar, and Amex costs about an extra 1%. If you're a small or medium organization, you'll probably pay a pretty standard amount, but if you're a large organization, you can usually negotiate a better rate, which just reduces the extra amount the bank charges you. It'll never go below the industry's base rate, which is complicated (i.e. it fluctuates and depends on lots of things), but I'd hazard it lives at around 1.5% (why not? for a start, consider all those points-reward systems out there and put that together with a certainty that card companies aren't losing money ...).

One thing this model helps you do is compare the bundled services, which will typically be the monthly or yearly + per-transaction costs of the payment gateway plus the % costs of the merchant account. You can sometimes see how they gamble on the % costs and give you a single 'blended rate', and sometimes gamble on transaction size by shifting costs back and forth between per-transaction and % rates.
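
To make that concrete with invented numbers: suppose a bundled offer charges a 2.9% blended rate plus $0.30 per transaction, while an unbundled gateway costs $25/month plus $0.10 per transaction, with a separate merchant account taking 2%. On 100 donations of $50 in a month, the bundle costs 100 x ($1.45 + $0.30) = $175, while the unbundled option costs $25 + $10 + $100 = $135. On only 10 such donations, the bundle costs $17.50 and the unbundled option $36. The 'better' deal depends entirely on your volume, which is exactly the gamble the bundlers are counting on.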

Friday, June 21, 2013

Drupal and file permissions challenges when using selinux


Twice now I've run into this class of problem and so I'm documenting it here for my future self and anyone else with a similar problem.

Most recently, a server I manage was generating a rather baffling error, seemingly at random:

Warning: file_put_contents(temporary:///.htaccess) [function.file-put-contents]: failed to open stream: "DrupalTemporaryStreamWrapper::stream_open" call failed in file_create_htaccess() (line 498 of /[documentroot]/includes/file.inc).

Baffling because Apache (and pretty much any other process on a Linux server) has access to read and write the /tmp directory, and extra baffling because the file was there - it had been created.

It seemed to happen mostly when editing, but not exclusively. After doing a stack trace, I discovered this about file management in Drupal:


  1. As a security measure, Drupal checks for an .htaccess file in all directories it writes to.
  2. That includes the temporary directory [which is good, because sometimes that directory is inside the web document root].
  3. Which means it's going to write a .htaccess file to your /tmp directory, if you use the default temporary directory setting in unix.


All that is well and good unless you're running SELinux, which this server is. In this case, it's also using fcgi, which means the SELinux rules are a little less standard and more prone to issues.

Conclusions:


  1. When you've got confusing file permission errors, check the /var/log/audit directory. If you don't know what I'm talking about, check http://wiki.centos.org/HowTos/SELinux
  2. The key for this error was looking at the .htaccess file with the ls -Z command. The -Z option shows the extra SELinux context settings on each file.
  3. To fix my version of the error, I used this:


chcon -v --type=httpd_sys_content_t /tmp/.htaccess

i.e. changing the SELinux "type" solved it.
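
For my future self, the full diagnostic sequence looks roughly like this (a sketch - audit log locations and the right label can vary by distribution and policy):

# 1. look for recent AVC denials mentioning the file
grep denied /var/log/audit/audit.log | grep htaccess

# 2. inspect the SELinux context on the file (-Z shows the context)
ls -Z /tmp/.htaccess

# 3. relabel it so the web server is allowed to read it (the fix above)
chcon -v --type=httpd_sys_content_t /tmp/.htaccess

Note that chcon changes don't survive a full filesystem relabel, so if this recurs, a restorecon-based fix or a semanage file-context rule may be more durable.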

Saturday, June 08, 2013

Blame or Responsibility? Point the finger!

Would you rather get blamed, or held responsible for something?

When something bad happens, I notice that there are often replies about the importance of taking responsibility, and frequent rebuttals about not pointing the finger or blaming. But hold on, what exactly is the difference?

According to Wikipedia (for example), blame can be defined as the act of holding responsible. Certainly, in usage, you'll see that blame is usually given, and responsibility is more often taken, but I'd say those are just tricks of language -- I can accept blame for myself and hold others responsible just as well.

So I'd like to stop pretending that this is a real difference. You may have some clever way of distinguishing between them, but for the average person, the only difference is one of implicit value (responsible = good, blame = bad), and that really doesn't help us at all when it comes to public debate or private argument.

Okay, so I'm not so naive as to think it's all a textual misunderstanding that I'm going to fix with some clever logic. I recognize that there's a good reason we have these two words - for example, people's lives and livelihoods frequently depend on the allocation of responsibility/blame, and in most real-life examples, assigning blame/responsibility is not a question of fact but of interpretation, so suddenly power and politics and, more often than not, money are also involved. Having different words with different values allows for some clever social engineering.

But what I'd really like is to stop seeing comments with the subtext "it's wrong to blame". We make mistakes, and people with a lot of power need to be careful when assigning responsibility, but that shouldn't stop us from asking important questions about why bad things come to be, thinking about how to change them, and making tentative suggestions about the way forward.

In the media, it's done all the time with politicians and other leaders, and we consider that fair game, so let's not pretend only evil people are to blame. We all make mistakes, so can we be willing to take the blame? It doesn't have to be the end of the world.

Go ahead, point the finger. Maybe you're wrong, and let's just start by saying that's okay. Assigning blame can be the start of a beautiful conversation if you're not afraid to say or hear it.

And, if you're still reading, next on my wish list is: severe financial penalties for politicians who make knowingly false statements to the press. Doug Ford, I'm pointing my finger at you. Stephen Harper, watch your step.

Wondering about the donut? It's my credit to Earl Miles and his angry donuts.

Friday, April 19, 2013

TD Canada Trust and Online Security


For the past few weeks I've been unable to access the TD EasyWeb site. Today I discovered that it's because "TD made a corporate decision to only support Windows and Mac".

I have a few problems with this. Personally, it's a hassle because I can no longer use their EasyWeb site unless I borrow someone else's computer. This seems like an anti-security measure. And it's an extra insult that the change was not communicated responsibly.

I have a bigger problem because the response I got when I talked to a manager was that the only way of dealing with it was to write to a customer feedback email address. And the reality is, if not enough people complain, they won't do anything about it. Basically, they're treating my issue as one of personal preference, rather than one of technical choices and security.

But online security is not at all a matter of personal preference. If a majority of users decided they didn't want as many security precautions as they've got, would that mean you should remove them? I don't think so, and I don't think anyone else does either, but that might be what you'd get if you held a vote.

To add insult to injury: people don't use Windows just because "they prefer it". Most people's technical choices are governed by a much more complicated ecosystem of supply and capitalism and monopolies and corporate choices and evolving technologies. And Windows computers are responsible for most of the world's security issues - for lots of reasons. So TD's decision is reinforcing the serious internet security problem that we already have.

More specific to this issue - what actually happens when I try to use their system is that I get an "Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data." My guess is that, in implementing their new 'security' measure (presumably a response to last month's denial-of-service attacks), they have decided to filter incoming requests based on the user agent, and to accept only those on their "supported" list. This sort of makes sense because it excludes old Windows IE users, which it should, but it's a terrible way to solve their problem - and it doesn't solve it, because that's only a small part of the problem.

[Update later today: I've had no reply from TD, but a simple experiment shows that my guess above was correct, i.e. they're filtering the incoming traffic based on the user agent string. I used Chrome's standard developer tools to send the IE9 user agent string, and voila, it connects. So my personal problem is easily fixed this way, as it is for anyone else using a 'non-supported' platform. I'm really not impressed ..]
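
If you want to run that experiment from the command line instead, a quick sketch with curl (the URL is a placeholder, and this is just one plausible IE9 user agent string):

# default user agent: the connection closes without sending data
curl -I https://easyweb.example.com/

# spoofed IE9 user agent: a normal response comes back
curl -I -A "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)" https://easyweb.example.com/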

TD Bank Technical Department: I have written to you, please reply.

Thursday, April 04, 2013

Tax Havens

I've been working with Canadians for Tax Fairness since they started a couple of years ago, and it was extremely satisfying to see them in action during the current media frenzy around the tax haven data leak. Last December we created an issue-specific campaign site about tax havens, and although it hadn't taken off, I'm hoping it's going to get a little bump now.

While I was waiting, I checked out Google Webmaster Tools and noticed that the campaign site had been getting a search traffic increase over the past week or so, and I guessed that it was related to searches by journalists who were in on the leak, preparing their stories. I was delighted to see our campaign site sitting at number 7 for the search term "tax havens".

Then it occurred to me to check out Google Trends to see what it had to say about the search term, and I offer you the following infographics from it. I thought the geographic one might be especially illuminating, in particular showing that Australia is surprisingly interested. What's not evident is why they're interested - a nation of investigative journalists, or a population desperate to hide its cash from the taxman?

Another curiosity - why is this only in Anglo-American countries plus India? Then it dawned on me ... I'd have to look for the translation to get any traffic from, say, Russia. After trying and failing to figure that out, I did manage this search trend for Cyprus.

Tuesday, March 12, 2013

The Real World of Website Requirements

Do you want to talk to me about developing a website? Here's what I need in a nutshell.

A website is part of the Internet. The Internet is a tool for electronic communications. That's all it is. Really. Everything else is just about how it does that, which is also important.

So before we talk about anything else, the most important things are:

1. What is the content you are communicating? i.e. what are you saying?
2. Who are you communicating with? i.e. who is the intended audience and are there privacy issues with this content?
3. Who is the author, authority, source, etc. of the content? i.e. who's responsible for the content, who's going to write it, who's going to change it.

If you start the conversation about modules, how it looks, iframes, menus, or anything else, I will always get back to these questions, so do us both a favour and think about these first, write them down and email them to me.

If you're wondering about the title of this post, you'll want to see this:

http://www.cbc.ca/ideas/massey-archives/1989/11/07/1989-massey-lectures-the-real-world-of-technology/

Monday, February 25, 2013

Responsive design and colour in web development

I'm not a web designer. If you've worked with me before, you're probably tired of me saying that. Funnily enough, in high school I took art, and always considered myself artsy by inclination, if not vocation or personality.

On a recent project I ended up doing more design work than I'd planned, which happens. I learned two new things from this process:

1. Responsive web design using Zen Grids. It's kind of funny to be back using grids like the old table-layout world of pre-2000. But it's now sane, and Zen Grids is one way to keep up with the cool kids doing 'responsive design', which just means your site looks good on all kinds of devices (yeah, just like HTML was supposed to by original design, grumble, grumble).

2. Mac colour vs. PC colour. I've known about the different experience of PC users vs. Mac users for a while, but have tried to ignore it (claiming, truthfully, that I am in fact colour blind, though not very). On one project, the designer delivered his mockups as PDFs, and after I implemented them, I discovered that all my colours were wrong, because he'd done his work on a Mac.

On this project, I was working with Anne-Marie doing some bare-knuckle colour tweaking and was really confused because she was loving a colour scheme that I thought was pretty ugly. After putting it down to my colour blindness getting worse, I got a phone call from her one day telling me it WAS pretty ugly, just not on her Mac. Since it was late in the process, I did a little research one weekend and came up with a quick way of making the site's colour CSS on a Mac different from the one on a PC (i.e. anything non-Mac). The way I did this was based on this page:

http://www.w3.org/TR/1998/REC-CSS2-19980512/colors.html#gamma-correction

What I now understand about colours and computers is this:

The way we define colours for CSS (i.e. an RGB triplet) is an abstraction - a point in a three-dimensional space (0..255 in each dimension), which is then turned into a colour on our monitor. That's all nice and clean, and I remember from physics that colour is just a wavelength, so this all sounds pretty reasonable. But here's where it gets tricky: the mapping of that three-dimensional space onto actual colours is based on old CRT monitor technology, basically converting each RGB value into a voltage for the corresponding phosphor colour (with some adjustments, since the phosphors aren't actually red, green and blue). So we're mapping our colour space onto a collection of colours defined by an electro-mechanical process, a process which can't actually produce all possible visible colours. Which is why it should be, and sometimes is, called sRGB - i.e. there's no canonical, universal, logical map from RGB numbers to real colours, only a convention invented by Microsoft et al. based on machines that are now largely obsolete.

So Steve Jobs and his designers had a problem - they wanted to expand the available colours, but the sRGB standard wasn't actually able to express them, even if his machines could display them. It's kind of like trying to play piano music on a bad cassette tape: it's just not possible, because some frequencies can't be reproduced. The way they solved it was by keeping the RGB idea but changing the way an RGB colour was presented on their machines, taking those RGB values and increasing their "intensity" on display. Which is why a given website looks brighter and more intense on a Mac, even though most PCs are capable of displaying those colours as well.

So the answer to my problem was: take the colours in my CSS file, perform a simple mathematical operation (the "gamma" exponent) and resave those colours in the original CSS file, keeping the old ones in their own CSS file ("mac.css") that only gets loaded for Mac machines (using some simple javascript in this case, though you could use other mechanisms, I suppose).
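
Here's a minimal sketch of the two pieces; the exponent ratio and the Mac detection are my assumptions, so check them against the gamma discussion on the W3C page above:

// adjust one 0-255 channel by a gamma exponent; 1.8/2.2 reflects the
// classic Mac (1.8) vs. PC (2.2) display gammas - flip the ratio to go
// the other way
function gammaAdjust(c, exponent) {
  return Math.round(255 * Math.pow(c / 255, exponent));
}

// e.g. rework a colour picked on a Mac for non-Mac display
var r = gammaAdjust(180, 1.8 / 2.2);
var g = gammaAdjust(120, 1.8 / 2.2);
var b = gammaAdjust(60, 1.8 / 2.2);

// and serve the original palette only to Macs
if (navigator.platform.indexOf('Mac') !== -1) {
  document.write('<link rel="stylesheet" href="/mac.css">');
}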

You might think this must surely be re-inventing the wheel, since the problem has existed for some time. It might be, but I couldn't find that wheel anywhere. I suspect this is a problem that mostly just gets wished away, and mostly it doesn't seem to matter if you pick your colours well (Mac users are just used to a different experience). Maybe our colours on this site were just borderline enough that it did matter (what does borderline mean? I think it means close to the edge of the available colour space).

And of course, if you're mixing and matching CSS and PNG files as part of your design, you need to pay attention to a whole different issue of consistency, which is what most of the discussion about Mac/PC colour differences seems to be focussed on.

Comments welcome.

Tuesday, January 08, 2013

Democratic activists: engage your representatives

For a long time, some kind of "postal code lookup" tool has been the holy grail of e-activism. I wrote such a tool that sent faxes to MPs back in the early 2000s [the aughties?].

But here in Canada we ran into a problem: postal-code-to-riding databases are compiled by Stats Canada and licensed under restrictive terms. So in spite of various attempts to come up with a sustainable solution, they've mostly been ad hoc and have failed in the long run because of the cost and effort of keeping that database up to date.

So I'd been deflecting new requests for such a tool for years, hoping someone else would solve it, until, a year or more ago, two of my clients said they really wanted one. It occurred to me that geocoding had now evolved to the point where we could use a different strategy: instead of keeping a database of postal codes to ridings, we could geocode addresses to latitude/longitude and then use the now freely available shape files of ridings to do the lookup. Bonus for this method: we can do lookups with partial addresses that don't have a postal code.

Fortunately, I dallied in getting started, and before I could write any code, I was referred to this site:

http://represent.opennorth.ca/

which basically does all this already, and provides a machine interface for external use.
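
For the curious, the machine interface amounts to simple web requests. A sketch (endpoints as I understand them - check the API documentation on the site for the authoritative details):

# which electoral boundaries contain this latitude/longitude?
curl "http://represent.opennorth.ca/boundaries/?contains=45.4215,-75.6972"

# or look up representatives by postal code directly
curl "http://represent.opennorth.ca/postcodes/K1A0A6/"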

So that meant I could focus on the interface issues and leave the technical lookup to an open data web service.

But to make matters even nicer, along came this module:

http://drupal.org/project/webform_civicrm

which solves a lot of the interface problems.

All of which brings me to what this post is all about -- I've just released the 6.x-1.0 version of a new Drupal module:
CiviCRM Represent Open North Integration

And that means you can now enable and encourage your CiviCRM constituents to contact their representative with a simple interface that feeds into and back from your CiviCRM database, making the process as simple as possible for them. Try this as an example:

http://www.taxfairness.ca/action/how-many-tax-dollars

In fact, it's even a little nicer than you see here, because you can generate emails to your constituents with CiviCRM "tokens", so that just clicking on the link in their email will take your constituent to a version of the form that has all their information pre-filled. Yes, it's almost bordering on zombie activism, but I mostly think the fewer technical distractions the better, and it doesn't get much better than this.

Did I mention that this works not just for federal MPs, but for most if not all MPPs, and even city councillors in some big cities?

[Update, March 29, 2013: I've just published a Drupal 7 version with more documentation]