diff options
author | Julio Capote <jcapote@gmail.com> | 2018-11-06 03:03:41 +0000 |
---|---|---|
committer | Julio Capote <jcapote@gmail.com> | 2018-11-06 03:03:41 +0000 |
commit | 4b489a049a0063bbb1fd9f0c0f74ce1ee9f87a86 (patch) | |
tree | 98af5707e30150af482e297bed9cd4e9b5477e6a /content/post | |
parent | a62a3e7755579d93ce3a87243dd277575930fffe (diff) | |
download | capotej.com-4b489a049a0063bbb1fd9f0c0f74ce1ee9f87a86.tar.gz |
import old posts
Diffstat (limited to 'content/post')
20 files changed, 1166 insertions, 1 deletions
diff --git a/content/post/2008-10-11-tabbing-through-fields-vertically.markdown b/content/post/2008-10-11-tabbing-through-fields-vertically.markdown
new file mode 100644
index 0000000..639cbb3
--- /dev/null
+++ b/content/post/2008-10-11-tabbing-through-fields-vertically.markdown
@@ -0,0 +1,26 @@
---
layout: post
title: "Tabbing through fields vertically"
date: 2008-10-11T01:54:00Z
comments: false
permalink: /post/54058512/tabbing-through-fields-vertically
categories:
---

Sometimes it’s useful to switch the browser’s default tabbing behavior (left to right) to the opposite (top to bottom) when your input fields are in a grid layout instead of the usual single column layout. Having to do this manually is a real pain, especially for large grids, so here is a solution in javascript, using mootools:

```javascript
window.addEvent('domready', function(){
  var trs = $$('#mytable tr')
  var accum = 0
  trs.each(function(tr, trindex){
    accum = trindex + 1
    tr.getChildren().each(function(td, tdindex){
      td.getChildren('input')[0].tabIndex = accum
      accum = accum + trs.length
    })
  })
})
```

diff --git a/content/post/2008-10-12-arrow-key-navigation-for-text-fields.markdown b/content/post/2008-10-12-arrow-key-navigation-for-text-fields.markdown
new file mode 100644
index 0000000..6fcf882
--- /dev/null
+++ b/content/post/2008-10-12-arrow-key-navigation-for-text-fields.markdown
@@ -0,0 +1,75 @@
---
layout: post
title: "Arrow key navigation for text fields"
date: 2008-10-12T17:41:00Z
comments: false
permalink: /post/54266325/arrow-key-navigation-for-text-fields
categories:
---

Here is a class for enabling the use of arrow keys to navigate through a grid of input fields (using mootools):

```javascript
var FocusMover = new Class({

  initialize: function(sel, col_num){

    this.sel = sel
    this.col_num = col_num
    this.inputs = $$(this.sel)
    this.current_focus = 0

    var self = this

    this.inputs.each(function(item, index){
      item.addEvent('keydown', function(key){
        $try(function(){
          self[key.key]()
        })
      })
      item.addEvent('focus', function(e){
        self.refresh(e)
      })

      item.set('myid', index)
    })

    this.inputs[0].focus()

  },

  refresh: function(e){
    this.current_focus = e.target.get('myid')
  },

  down: function(){
    i = parseInt(this.current_focus) + parseInt(this.col_num)
    this.inputs[i].focus()
  },

  up: function(){
    i = parseInt(this.current_focus) - parseInt(this.col_num)
    this.inputs[i].focus()
  },

  left: function(){
    i = parseInt(this.current_focus) - 1
    this.inputs[i].focus()
  },

  right: function(){
    i = parseInt(this.current_focus) + 1
    this.inputs[i].focus()
  }

})
```

As you can see, the constructor takes two arguments: a selector (which should return a list of all your input fields), and the number of input field columns. So for a 4x2 table, you would set it up like this:

```javascript
var FM = new FocusMover('#mytable input', 4)
```

diff --git a/content/post/2008-10-28-so-you-want-to-click-that-button.markdown b/content/post/2008-10-28-so-you-want-to-click-that-button.markdown
new file mode 100644
index 0000000..a73a332
--- /dev/null
+++ b/content/post/2008-10-28-so-you-want-to-click-that-button.markdown
@@ -0,0 +1,59 @@
---
layout: post
title: "So you want to click that button?"
date: 2008-10-28T21:05:00Z
comments: false
permalink: /post/56866975/so-you-want-to-click-that-button
categories:
---

I stumbled upon [http://clickthatbutton.com](http://clickthatbutton.com) during my routine lurking of [hacker news](http://news.ycombinator.com). After being amused for about 10 seconds, I decided to take it to the next level; I wanted to click on it really, really fast. After going through a few solutions (simple js while loop in firebug, then curl/wget) and failing, the idea of using selenium popped into my head. So I went off to their [site](http://selenium-ide.openqa.org/download.jsp) and installed the extension.
I figured a simple recording of the mouse event, then wrapping it in a loop in selenium, would do the trick, but I quickly found that selenium doesn’t support loops. Not to be stopped, I searched Google and ended up with [this](http://51elliot.blogspot.com/2008/02/selenium-ide-goto.html). After installing the plugin for selenium (a plugin for a plugin!?) and restarting firefox, I tried it again and to my surprise it worked! The click counter was going up steadily on its own (18k clicks and counting). Here is my selenium test case for those of you following along:

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head profile="http://selenium-ide.openqa.org/profiles/test-case">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<link rel="selenium.base" href="http://clickthatbutton.com/" />
<title>haha</title>
</head>
<body>
<table cellpadding="1" cellspacing="1" border="1">
<thead>
<tr><td rowspan="1" colspan="3">haha</td></tr>
</thead><tbody>
<tr>
  <td>open</td>
  <td>/</td>
  <td></td>
</tr>
<tr>
  <td>store</td>
  <td>x</td>
  <td>1</td>
</tr>
<tr>
  <td>while</td>
  <td>storedVars['x'] == storedVars['x']</td>
  <td></td>
</tr>
<tr>
  <td>click</td>
  <td>submit</td>
  <td></td>
</tr>
<tr>
  <td>endWhile</td>
  <td></td>
  <td></td>
</tr>

</tbody></table>
</body>
</html>
```

Just paste that into a file, open it with selenium ide, hit play and you should be good to go.
diff --git a/content/post/2008-9-27-highlight-link-based-on-current-page-in-rails.markdown b/content/post/2008-9-27-highlight-link-based-on-current-page-in-rails.markdown
new file mode 100644
index 0000000..e696208
--- /dev/null
+++ b/content/post/2008-9-27-highlight-link-based-on-current-page-in-rails.markdown
@@ -0,0 +1,31 @@
---
layout: post
title: "Highlight link based on current page in rails"
date: 2008-09-27T19:47:00Z
comments: false
permalink: /post/52081481/highlight-link-based-on-current-page-in-rails
categories:
---

This is a common pattern in website navigation, where the site highlights the link (usually by setting `class="active"`) that took you to the current page while you are on that page.

First, define a helper:

```ruby
def is_active?(page_name)
  "active" if params[:action] == page_name
end
```

Then call it in your link_to’s in your layout as such:

```ruby
link_to 'Home', '/', :class => is_active?("index")
link_to 'About', '/about', :class => is_active?("about")
link_to 'contact', '/contact', :class => is_active?("contact")
```

This effect is achieved due to how link_to handles being passed `nil` for its `:class`: when `is_active?` returns `nil` (because it’s not the current page), `link_to` outputs nothing as its class (not `class=""` as you might expect).

diff --git a/content/post/2008-9-30-why-mootools-or-why-not-jquery.markdown b/content/post/2008-9-30-why-mootools-or-why-not-jquery.markdown
new file mode 100644
index 0000000..d7b3acc
--- /dev/null
+++ b/content/post/2008-9-30-why-mootools-or-why-not-jquery.markdown
@@ -0,0 +1,35 @@
---
layout: post
title: "Why MooTools (or Why not JQuery)"
date: 2008-09-30T10:26:00Z
comments: false
permalink: /post/52467447/why-mootools-or-why-not-jquery
categories:
---

##UPDATE 2012: this post is dumb and angsty, don't read

I’ve been toying around with MooTools a bit lately, and I’ve found the experience quite enjoyable and refreshing.
Naturally, I [twittered](http://twitter.com/capotej/statuses/939831956) about it and went along my merry way. Moments later (and much to my surprise), I had a direct message from John Resig himself asking “Why, what’s wrong with jQuery?”. I was pretty taken aback that he would take time from his surely busy day to message a total stranger in an effort to improve his project, or at least gain an insight into the everyday life of a js developer (it’s not like DHH would personally message people that are dumping rails to use merb). I figured he deserved a straight, honest answer; one that at least would be longer than [140 characters](http://twitter.com/capotej/statuses/940082809) (even though I managed to use every single one). So, the question: why MooTools?

* Class support. JQuery’s SQL-like syntax is fine for quick and dirty javascripting, but eventually you’ll want real classes to structure your UI logic.

* It smells, feels and tastes like regular javascript. JQuery doesn’t even look like javascript, which isn’t necessarily a bad thing, since that’s kind of their goal. MooTools, however, feels like just an extension of the language.

* Faster. [‘Nuff Said](http://mootools.net/slickspeed/) EDIT: This was pointed out to be false; it is only faster in certain cases (such as mine, WebKit nightly on OS X).

* Robert Penner’s easing equations baked right in. This could just be me, but I find the animations that mootools creates are a lot smoother than JQuery’s (especially the easing).

* Creating new DOM elements is a snap. Need to create a DOM element? `var el = new Element('a', { 'href': 'juliocapote.com' });` Done.

* Modular. I like that I can just build and pull down a moo.js that only contains the functionality I need.

* Better documented. Or at least, it’s faster to find what you need.

* Easier to hack on and extend.
While I haven’t personally delved into the internals of either system, the consensus seems to be that jquery is an unintelligible mess when it comes to modifying how it works.

* Prototype approach (versus a namespaced approach). This is really just a matter of preference; MooTools achieves its magic by just extending the prototypes of common objects (Array, String, etc). While this is obtrusive, it makes for shorter, more natural code. JQuery does its thing via a main object (which you can name, hence the namespace) that you wrap around whatever you want to make magical; this is unobtrusive, but you pay for it by having to wrap anything you want to use (which ends up being everything). It basically boils down to `arr.each(fn)` vs `$.each(arr, fn)`.

* It’s not a revolution. It feels as if JQuery is trying to take on the world (it seems like it, too, since it’s now included with visual studio and the nokia sdk). However, I’m not; I’m just trying to write some javascript here.

It’s not like I’m never going to use JQuery again; it simply isn’t my default js framework any longer.

diff --git a/content/post/2009-1-1-useful-rails-routing-tips.markdown b/content/post/2009-1-1-useful-rails-routing-tips.markdown
new file mode 100644
index 0000000..ef6525e
--- /dev/null
+++ b/content/post/2009-1-1-useful-rails-routing-tips.markdown
@@ -0,0 +1,75 @@
---
layout: post
title: "Useful Rails Routing tips"
date: 2009-01-02T15:50:00Z
comments: false
permalink: /post/67873462/useful-rails-routing-tips
categories:
---

Even though I have been using Rails for fun and profit for about 2 years now, I felt I never really used its routing engine to its full potential. So I checked out the new [Rails Routing from the outside in](http://guides.rubyonrails.org/routing_outside_in.html) guide and discovered a bunch of useful tricks that I (and maybe you) had no idea you could do.
Here they are:

### Multiple resource definitions on a single line

```ruby
map.resources :photos, :books, :videos
```

### Impose a certain format for resource identifiers

```ruby
map.resources :photos, :requirements => { :id => /[A-Z][A-Z][0-9]+/ }
```

This way, `/photos/3` would not work, but `/photos/DA321` would.

### Friendlier action names

Say that for your application ‘create’ and ‘change’ make more sense than the default ‘new’ and ‘edit’; you can do

```ruby
map.resources :photos, :path_names => { :new => 'make', :edit => 'change' }
```

You can also do this site-wide, in your environment.rb

```ruby
config.action_controller.resources_path_names = { :new => 'make', :edit => 'change' }
```

### Trim the fat off resources with :only and :except

When you use map.resources, rails generates 7 restful routes for that resource; but what if that resource only needed to be seen and listed, never edited or created?

```ruby
map.resources :photos, :only => [:index, :show]
```

If your application uses a lot of `map.resources` calls but not necessarily all its generated routes, you can save memory this way.
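The same trimming works in the other direction with `:except`, which generates every route except the ones listed. A quick sketch along the same lines (the comments resource here is a made-up example, not from the guide):

```ruby
# routes.rb sketch (hypothetical resource): everything except destroy
map.resources :comments, :except => [:destroy]
```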
### Adding extra routes to your resources

Instead of fighting the `map.resources` generator by placing a horror like this atop your routes.rb

```ruby
map.connect '/photos/:id/preview', { :controller => 'photos', :action => 'preview' }
```

You can do this to your already mapped resource

```ruby
map.resources :photos, :member => { :preview => :get }
```

This will map GET requests for `/photos/3/preview` to the preview action of your photos controller.

This can also be used on collections instead of singular members, just change `:member` to `:collection`

```ruby
map.resources :photos, :collection => { :search => :get }
```

This will give you `/photos/search` and hit the search action within the photos controller

diff --git a/content/post/2009-7-19-using-rack-applications-inside-gwt-hosted-mode.markdown b/content/post/2009-7-19-using-rack-applications-inside-gwt-hosted-mode.markdown
new file mode 100644
index 0000000..aaa1da1
--- /dev/null
+++ b/content/post/2009-7-19-using-rack-applications-inside-gwt-hosted-mode.markdown
@@ -0,0 +1,117 @@
---
layout: post
title: "Using Rack applications inside GWT Hosted mode"
date: 2009-07-19T18:51:00Z
comments: false
permalink: /post/145035194/using-rack-applications-inside-gwt-hosted-mode
categories:
---

This guide will show you how you can use JRuby to run any Rack application inside Google Web Toolkit’s (GWT) hosted mode server so your interface and your backend are of the Same Origin.

###Background

GWT has two ways of interacting with a server: GWT Remote Procedure Call (RPC) and plain HTTP (XHR). GWT-RPC is a high level library designed for interacting with server-side Java code. GWT-RPC implements the GWT RemoteService interface, allowing you to call those methods from the user interface. Essentially, GWT handles the dirty work for you. However, it only works on Java backends that can implement that interface.
Since most of my backends are Sinatra/Rack applications, I’ll be using the plain HTTP library.

###The problem

Due to the restriction of the [Same Origin](http://en.wikipedia.org/wiki/Same_origin_policy) policy, the interface served out of GWT’s development (Hosted Mode) server can only make requests back to itself. If you were using real servlets or GWT’s RemoteService this wouldn’t be an issue; but since Rack applications listen on their own port, you cannot make requests from GWT to your application without resorting to something like JSONP or server-side proxying. This leaves you having to compile your interface to HTML/JS/CSS, which is a lengthy process, and serve it from the origin of the Rack application to see your changes.

###The solution

Since I wanted to develop using GWT’s development environment with a Rack backend, I devised a way to use jruby-rack to load arbitrary Rack applications alongside the interface.

First, let’s set up the environment:

###Download and unpack the latest GWT for your platform (mine being linux)

```sh
wget http://google-web-toolkit.googlecode.com/files/gwt-linux-1.7.0.tar.bz2
tar -xvjpf gwt-linux-1.7.0.tar.bz2
cd gwt-linux-1.7.0
```

###Download the latest jruby-complete.jar

```sh
wget http://repository.codehaus.org/org/jruby/jruby-complete/1.3.1/jruby-complete-1.3.1.jar
mv jruby-complete-1.3.1.jar jruby-complete.jar
```

###Download the latest jruby-rack.jar

```sh
wget http://repository.codehaus.org/org/jruby/rack/jruby-rack/0.9.4/jruby-rack-0.9.4.jar
mv jruby-rack-0.9.4.jar jruby-rack.jar
```

###Create an app with webAppCreator

```sh
./webAppCreator -out MySinatra com.example.MySinatra
cd MySinatra
```

###Package gem dependencies

In order for this to work you have to package any gem dependencies your backend needs (sinatra, in our case) as jars within your application.
For Sinatra it looks like this: + +```sh +java -jar jruby-complete.jar -S gem install -i ./sinatra sinatra --no-rdoc --no-ri +jar cf sinatra.jar -C sinatra . +``` + +###Add jruby-complete.jar, jruby-rack.jar, sinatra.jar (and any other jars you’ve created) to the libs target of your build.xml + +```xml +<target name="libs" description="Copy libs to WEB-INF/lib"> + <mkdir dir="war/WEB-INF/lib" /> + <copy todir="war/WEB-INF/lib" file="${gwt.sdk}/gwt-servlet.jar" /> + <!-- Add any additional server libs that need to be copied --> + <copy todir="war/WEB-INF/lib" file="${gwt.sdk}/jruby-complete.jar" /> + <copy todir="war/WEB-INF/lib" file="${gwt.sdk}/jruby-rack.jar" /> + <copy todir="war/WEB-INF/lib" file="${gwt.sdk}/sinatra.jar" /> +</target> +``` + +###Add these lines right after <web-app> in war/WEB-INF/web.xml +```xml +<context-param> + <param-name>rackup</param-name> + <param-value> + require 'rubygems' + require './lib/sinatra_app' + map '/api' do + run MyApp + end + </param-value> +</context-param> +<filter> + <filter-name>RackFilter</filter-name> + <filter-class>org.jruby.rack.RackFilter</filter-class> +</filter> +<filter-mapping> + <filter-name>RackFilter</filter-name> + <url-pattern>/api/*</url-pattern> +</filter-mapping> +<listener> + <listener-class>org.jruby.rack.RackServletContextListener</listener-class> +</listener> +``` + +Note: All you’re doing here is passing the contents of a config.ru file into the `<param-value>` element for the `<context-param>` (make sure this is HTML encoded!). This states that any request to /api is to be handled by your Sinatra application and not GWT’s Hosted mode servlet. 
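Since the rackup snippet has to be HTML-encoded before it goes into `<param-value>`, one quick way to produce the encoded form is Ruby's stdlib CGI module. A sketch (the heredoc simply holds the same rackup text shown above):

```ruby
require 'cgi'

# the rackup string destined for web.xml's <param-value>
rackup = <<~RU
  require 'rubygems'
  require './lib/sinatra_app'
  map '/api' do
    run MyApp
  end
RU

# escapeHTML converts &, <, >, " and ' into XML-safe entities
puts CGI.escapeHTML(rackup)
```

Paste the escaped output into the `<param-value>` element.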
###Create your Sinatra backend and place it in war/WEB-INF/lib/sinatra_app.rb

```ruby
require 'sinatra'
require 'open-uri'

class MyApp < Sinatra::Base
  get '/showpage' do
    open('http://www.yahoo.com').read
  end

  get '/helloworld' do
    'hello world'
  end
end
```

###Run your new awesome setup

`ant hosted`

Now when you navigate to [http://localhost:8888/api/helloworld](http://localhost:8888/api/helloworld) or [http://localhost:8888/api/showpage](http://localhost:8888/api/showpage) you should see the Sinatra application being served via GWT.

diff --git a/content/post/2010-12-31-what-i-released-in-2010.markdown b/content/post/2010-12-31-what-i-released-in-2010.markdown
new file mode 100644
index 0000000..93912fa
--- /dev/null
+++ b/content/post/2010-12-31-what-i-released-in-2010.markdown
@@ -0,0 +1,40 @@
---
layout: post
title: "What I released in 2010"
date: 2010-12-31T13:29:00Z
comments: false
permalink: /post/2546786852/what-i-released-in-2010
categories:
---

Here’s a recap of what I’ve worked on and released in 2010:

###[Youtube Fraiche](https://github.com/capotej/youtube_fraiche)

I couldn’t find a youtube downloader that worked on github, so I wrote my own one evening.

###[Uploadd](https://github.com/capotej/uploadd) and [paperclip_uploadd](https://github.com/capotej/paperclip_uploadd)

I wanted to upload and store images off-site (using paperclip/rails) on a server which has cheaper bandwidth rates than S3. Using rainbows, this tiny rack script has handled over 1.5 million uploads at a peak of 10-15 uploads/sec. Also, it’s been running for about 6 months now without a single crash. Thank you Eric Wong!

There is also a plugin for the popular paperclip gem to use uploadd as a storage backend transparently.
+ +###[mrskinner](https://github.com/capotej/mrskinner/blob/master/mrskinner.js) + +Tiny javascript for making the site gutters clickable based on a fixed width layout + +###[existential](https://github.com/capotej/existential) + +Completely inspired by Nick Kallen’s [post](http://pivotallabs.com/users/nick/blog/articles/272-access-control-permissions-in-rails) on authorization, I wanted to extract that pattern into a rails plugin that I could use for all my projects. I use devise/existential for all my projects now. + +###[has_opengraph](https://github.com/capotej/has_opengraph) + +Easy way to participate in opengraph and draw facebook like buttons. Just annotate your models with meta data, and draw it in your view easily. + +###[chewbacca](https://github.com/capotej/chewbacca) + +I kinda feel bad that I took a cool name for such a lame script. Anyway it’s a set of rake tasks that provide a hair of abstraction above scp. Useful when you have a set of files locally that map to a different set of files remotely. + + + +I already have tons of stuff in the works for 2011! diff --git a/content/post/2011-1-3-migrationfor-write-migrations-right-from-the-command.markdown b/content/post/2011-1-3-migrationfor-write-migrations-right-from-the-command.markdown new file mode 100644 index 0000000..406183b --- /dev/null +++ b/content/post/2011-1-3-migrationfor-write-migrations-right-from-the-command.markdown @@ -0,0 +1,68 @@ +--- +layout: post +title: "MigrationFor: Write migrations right from the command line!" +date: 2011-01-03T09:48:00Z +comments: false +permalink: /post/2583891119/migrationfor-write-migrations-right-from-the-command +categories: +--- + + + +As someone who mostly stays in the rails console, I’ve always hated forgetting a field, creating a migration, finding it among your other 500 migration files, then adding the one line you need to add, then running it. This is probably the most annoying part of the Rails experience. 
I’ve always wanted to write a better migration generator that could take a list of commands/fields and write the migration for you, since most of the time what you name a migration has all the info it needs (add_index_to_post_id). Thanks to the heavily refactored plugin/generator API in Rails 3, I was able to do just that.

Let’s take a look at how it works:

First, install it (only works for Rails 3):

`rails plugin install git://github.com/capotej/migration_for.git`

Then, you can create migrations like so:

`rails g migration_for add_index:posts:posts_id`

This would generate db/migrate/20110103182654_add_index_posts_posts_id.rb:

```ruby
class AddIndexPostsPostsId < ActiveRecord::Migration

  def self.up
    add_index 'posts','posts_id'
  end

  def self.down
    #waiting for reversible migrations in rails 3.1!
  end

end
```

Which you can then run normally with `rake db:migrate`

Let’s look at a more complex example:

`$ rails g migration_for create_table:posts add_column:posts:title:string add_column:posts:user_id:integer add_index:posts:user_id`

Would generate:

```ruby
class CreateTablePostsaddColumnPostsTitleStringaddColumnPostsUserIdIntegeraddIndexPostsUserId < ActiveRecord::Migration

  def self.up
    create_table 'posts'
    add_column 'posts','title','string'
    add_column 'posts','user_id','integer'
    add_index 'posts','user_id'
  end

  def self.down
    #waiting for reversible migrations in rails 3.1!
  end

end
```

It uses a lookup table with all the [activerecord transformations](http://api.rubyonrails.org/classes/ActiveRecord/Migration.html) and will only insert an expression into a migration if the method name is valid and it has the right number of arguments, so botched commands won’t mess up the migration. Hope you enjoy it as much as I have!
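The lookup-table validation described above can be sketched roughly like this (a hypothetical illustration of the idea, not the plugin's actual code; the table contents and helper name are made up):

```ruby
# Hypothetical arity table: command name => number of expected arguments
TRANSFORMATIONS = {
  'create_table' => 1,  # table
  'add_column'   => 3,  # table, column, type
  'add_index'    => 2   # table, column
}

# Turn one "name:arg:arg" command into a migration expression,
# or nil if the name is unknown or the argument count is wrong.
def migration_line(command)
  name, *args = command.split(':')
  arity = TRANSFORMATIONS[name]
  return nil unless arity && args.size == arity
  "#{name} #{args.map { |a| "'#{a}'" }.join(',')}"
end

puts migration_line('add_column:posts:title:string')  # prints: add_column 'posts','title','string'
puts migration_line('bogus:stuff').inspect            # prints: nil (botched command skipped)
```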
Source available here: [https://github.com/capotej/migration_for](https://github.com/capotej/migration_for)

diff --git a/content/post/2011-9-13-render-image-links-directly-inside-adium.markdown b/content/post/2011-9-13-render-image-links-directly-inside-adium.markdown
new file mode 100644
index 0000000..6062e02
--- /dev/null
+++ b/content/post/2011-9-13-render-image-links-directly-inside-adium.markdown
@@ -0,0 +1,24 @@
---
layout: post
title: "Render image links directly inside Adium"
date: 2011-09-13T08:59:00Z
comments: false
permalink: /render-image-links-directly-inside-adium
categories:
---

Last night I delightfully discovered that Adium Message Styles are just html, css, and javascript rendered inside a webview. The next natural step was to write something in it, so I wrote a Message Style that tries to render any image link directly inline in the conversation (campfire style).

![](/images/blog/adium1.png)

The code was written at midnight after a long day, so it's not the best. Basically, it's a setInterval that runs every 2.5 seconds, looping through all message elements and appending an img tag to the body of the message if an image link is detected. It also removes the processing class so as not to reprocess the same messages.

Installation is simple, just download:

[http://dl.dropbox.com/u/42561/Stockholm.AdiumMessageStyle.zip](http://dl.dropbox.com/u/42561/Stockholm.AdiumMessageStyle.zip)

and extract into ~/Library/Adium 2.0/Message Styles (create if necessary). Then choose the TOP Stockholm theme (no idea why there are two entries), and close your chat window. It should be activated the next time a chat window opens.
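The polling loop described above amounts to something along these lines (a sketch, not the shipped Message Style code; the `.processing` class and selectors are assumptions for illustration):

```javascript
// Does a URL look like an image link?
function isImageLink(url) {
  return /\.(png|jpe?g|gif)(\?.*)?$/i.test(url);
}

// Append an img for each image link found in messages still marked
// "processing", then unmark them so they are not reprocessed.
function inlineImages(doc) {
  var links = doc.querySelectorAll('.processing a');
  for (var i = 0; i < links.length; i++) {
    var a = links[i];
    if (isImageLink(a.href)) {
      var img = doc.createElement('img');
      img.src = a.href;
      a.parentNode.appendChild(img);
    }
  }
  var pending = doc.querySelectorAll('.processing');
  for (var j = 0; j < pending.length; j++) {
    pending[j].classList.remove('processing');
  }
}

// inside the webview this would run on a 2.5 second timer
if (typeof document !== 'undefined') {
  setInterval(function () { inlineImages(document); }, 2500);
}
```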
![](/images/blog/adium2.png)

diff --git a/content/post/2012-1-25-finagle-with-scala-bootstrapper.markdown b/content/post/2012-1-25-finagle-with-scala-bootstrapper.markdown
new file mode 100644
index 0000000..93988e3
--- /dev/null
+++ b/content/post/2012-1-25-finagle-with-scala-bootstrapper.markdown
@@ -0,0 +1,133 @@
---
layout: post
title: "Finagle with scala-bootstrapper"
date: 2012-01-25T09:45:00Z
comments: false
permalink: /finagle-with-scala-bootstrapper
categories:
---

I've been fascinated by the concepts in [finagle](http://twitter.github.com/finagle/) for some time, but being a scala noob, I never knew how to bootstrap a finagle project. Turns out twitter has a gem, [scala-bootstrapper](https://github.com/twitter/scala-bootstrapper), that generates a simple thrift-based key/value store for you. There's even a [tutorial](http://twitter.github.com/scala_school/searchbird.html) on how to extend the example project into a distributed search service.

This is a guide on setting it all up locally; it assumes you have Git, Homebrew, and OS X.
###Install scala 2.8.1

```sh
$ brew versions scala
$ cd /usr/local # or wherever you have homebrew installed
$ git checkout -b scala281 0e16b9d # make sure the SHA matches the versions output
$ brew install scala
$ git checkout master
$ git branch -D scala281
```

###Install sbt 0.7.4 (assumes you have a ~/bin in your $PATH)

```sh
$ curl -o ~/bin/sbt-launch.jar http://simple-build-tool.googlecode.com/files/sbt-launch-0.7.4.jar
$ echo 'java -Xmx1G -jar `dirname $0`/sbt-launch.jar "$@"' > ~/bin/sbt
$ chmod +x ~/bin/sbt
```

###Install scala-bootstrapper

```sh
$ gem install scala-bootstrapper
```

###Generate a finagle project

```sh
$ mkdir newbird
$ cd newbird
$ scala-bootstrapper newbird
$ sbt update
$ sbt test
```

###Add a Client class

Create newbird/src/main/scala/com/twitter/newbird/Client.scala with

```scala
package com.twitter.newbird

import com.twitter.finagle.builder.ClientBuilder
import com.twitter.finagle.thrift.ThriftClientFramedCodec
import com.twitter.newbird.thrift._
import org.apache.thrift.protocol.TBinaryProtocol

import java.net.InetSocketAddress

class Client {

  val service = ClientBuilder().hosts(Seq(new InetSocketAddress("localhost", 9999)))
                               .codec(ThriftClientFramedCodec())
                               .hostConnectionLimit(1)
                               .build()
  val client = new NewbirdServiceClientAdapter(
    new thrift.NewbirdService.ServiceToClient(service, new TBinaryProtocol.Factory))

  def get(key: String) = client.get(key)()
  def put(key: String, value: String) = client.put(key, value)()

}
```

###Running the server

```sh
$ cd newbird
$ sbt
> run -f config/development.scala
```

###Playing with the client

```sh
$ cd newbird
$ sbt console
scala> import com.twitter.newbird.Client
scala> val client = new Client()
scala> client.put("foo","bar")
scala> client.get("foo")
```

###Bonus

finagle exports a stats url you can curl:

```sh
$ curl http://localhost:9900/stats.txt
counters:
  Newbird/connects: 1
  Newbird/requests: 4
  Newbird/success: 4
gauges:
  Newbird/connections: 0
  Newbird/pending: 0
  jvm_heap_committed: 588251136
  jvm_heap_max: 2146828288
  jvm_heap_used: 64354560
  jvm_nonheap_committed: 83267584
  jvm_nonheap_max: 318767104
  jvm_nonheap_used: 68655360
  jvm_num_cpus: 4
  jvm_start_time: 1327511164928
  jvm_thread_count: 14
  jvm_thread_daemon_count: 9
  jvm_thread_peak_count: 14
  jvm_uptime: 2626505
labels:
metrics:
  Newbird/connection_duration: (average=2590412, count=1, maximum=2590412, minimum=2590412, p25=2590412, p50=2590412, p75=2590412, p90=2590412, p99=2590412, p999=2590412, p9999=2590412)
  Newbird/connection_received_bytes: (average=192, count=1, maximum=192, minimum=192, p25=192, p50=192, p75=192, p90=192, p99=192, p999=192, p9999=192)
  Newbird/connection_requests: (average=4, count=1, maximum=4, minimum=4, p25=4, p50=4, p75=4, p90=4, p99=4, p999=4, p9999=4)
  Newbird/connection_sent_bytes: (average=120, count=1, maximum=120, minimum=120, p25=120, p50=120, p75=120, p90=120, p99=120, p999=120, p9999=120)
  Newbird/request_latency_ms: (average=14, count=4, maximum=39, minimum=2, p25=2, p50=8, p75=10, p90=39, p99=39, p999=39, p9999=39)
```

diff --git a/content/post/2012-1-9-alfred-extension-for-creating-wunderlist-task.markdown b/content/post/2012-1-9-alfred-extension-for-creating-wunderlist-task.markdown
new file mode 100644
index 0000000..9e17c6a
--- /dev/null
+++ b/content/post/2012-1-9-alfred-extension-for-creating-wunderlist-task.markdown
@@ -0,0 +1,26 @@
---
layout: post
title: "Alfred Extension for creating Wunderlist tasks"
date: 2012-01-09T23:36:00Z
comments: false
permalink: /alfred-extension-for-creating-wunderlist-task
categories:
---

While looking for a way to add wunderlist tasks via alfred, I came upon [wunderlist-for-alfred](http://jdfwarrior.tumblr.com/post/13163220116/wunderlist-for-alfred).

Looked cool, but I wanted to write my own that didn't depend on php.
+ +I used ```lsof``` to figure out the location of the db, then used ```file``` to see what kind of db it was. Luckily, it was sqlite3, so I was able to poke around and figure out the sql to create a task. + +Here's the alfred extenstion that ties it all together: + +```sh +user=`whoami` +wunderdb="/Users/$user/Library/Wunderlist/wunderlist.db" +sqlite3 $wunderdb "insert into tasks (name, list_id) values ('{query}', 1)" +``` + +Download it [here](http://dl.dropbox.com/u/42561/wunderlist-capotej.alfredextension) diff --git a/content/post/2012-10-07-an-embedded-key-value-store-for-shell-scripts.markdown b/content/post/2012-10-07-an-embedded-key-value-store-for-shell-scripts.markdown new file mode 100644 index 0000000..a014a41 --- /dev/null +++ b/content/post/2012-10-07-an-embedded-key-value-store-for-shell-scripts.markdown @@ -0,0 +1,75 @@ +--- +layout: post +title: "an embedded key / value store for shell scripts" +date: 2012-10-07T10:06:00Z +comments: true +categories: ['shell scripting', 'databases'] +--- + +UPDATE: this is now available as a [sub](http://github.com/37signals/sub) command, here: [kiev](http://github.com/capotej/kiev) + +Cooked this up last night when I needed a simple key/value store for use in a shell script: + +```sh db.sh +#!/bin/sh + +DBFILE=example.db + +put(){ + echo "export kv_$1=$2" >> $DBFILE +} + +del(){ + echo "unset kv_$1" >> $DBFILE +} + +get(){ + source $DBFILE + eval r=\$$(echo "kv_$1") + echo $r +} + +list(){ + source $DBFILE + for i in $(env | grep "kv_" | cut -d= -f1 ); do + eval r=\$$i; echo $(echo $i | sed -e 's/kv_//') $r; + done +} + +## cmd dispatch + +if [ ${1:-0} == "set" ]; then + put $2 $3 +elif [ ${1:-0} == "get" ] ; then + get $2 +elif [ ${1:-0} == "list" ] ; then + list +elif [ ${1:-0} == "del" ] ; then + del $2 +else + echo "unknown cmd" +fi +``` + +Use it like so: + + +`$ ./db.sh set foo bar` + +`$ ./db.sh get foo` + +`$ ./db.sh set foo baz` + +`$ ./db.sh get foo` + +`$ ./db.sh del foo` + +`$ ./db.sh list` + + +## 
How it works
+
+Every time you update/set/delete a value, it writes a shell expression to an append-only log,
+exporting a shell variable (key) with that value. By sourcing the file every time we read a value, we
+replay the log, bringing our environment to a consistent state. Then, reading the value is just looking
+up that dynamic variable (key) in our shell environment.
diff --git a/content/post/2012-10-11-riak-at-posterous.markdown b/content/post/2012-10-11-riak-at-posterous.markdown
new file mode 100644
index 0000000..7ef847f
--- /dev/null
+++ b/content/post/2012-10-11-riak-at-posterous.markdown
@@ -0,0 +1,14 @@
+---
+layout: post
+title: "riak at posterous"
+date: 2012-10-11T13:47:00Z
+comments: true
+categories: ['riak', 'posterous', 'presentation']
+---
+
+A few months ago, I gave a presentation on how Posterous uses Riak for its post cache. At [#ricon2012](http://basho.com/community/ricon2012/)
+I ended up retelling this story to numerous people, so I thought I'd post the slides and video here.
+
+<iframe src="http://player.vimeo.com/video/35905739?title=0&byline=0&portrait=0&color=000000" width="512" height="421" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
+
+<iframe src="http://www.slideshare.net/slideshow/embed_code/11160556?rel=0" width="512" height="421" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px" allowfullscreen> </iframe>
diff --git a/content/post/2012-11-01-base-a-scala-project-generator.markdown b/content/post/2012-11-01-base-a-scala-project-generator.markdown
new file mode 100644
index 0000000..a0e39f7
--- /dev/null
+++ b/content/post/2012-11-01-base-a-scala-project-generator.markdown
@@ -0,0 +1,40 @@
+---
+layout: post
+title: "base: a scala project generator"
+date: 2012-11-01T14:39:00Z
+comments: true
+categories: ["efficiency", "scala", "shell scripting"]
+---
+
+Finally got tired of copy-pasting other projects and gutting them to make new ones, so I created [base](http://github.com/capotej/base), a shell command that creates new scala projects.
+
+
+Creating the project:
+
+```sh
+$ base new com.capotej.newproj
+creating project: newproj
+  creating App.scala
+  creating AppSpec.scala
+  creating pom.xml
+  creating .gitignore
+  creating .travis.yml
+  creating LICENSE
+  creating README.markdown
+Done! run mvn scala:run to run your project
+```
+
+Based on the package name, it inferred that the project name is ```newproj``` and created the project under that folder. Let's build and run it:
+
+```sh
+$ cd newproj
+$ mvn compile scala:run
+(... maven output ...)
+hello world
+```
+
+This uses the new incremental compiler for maven, [zinc](http://github.com/typesafehub/zinc), which dramatically speeds up compile times (except for the first time you run it). 
It also sets you up with the latest scalatest maven plugin, which gives you sweet-looking test output, like so:
+
+![](http://i.imgur.com/qyyem.png)
+
+See the base [README](http://github.com/capotej/base#readme) for installation instructions.
diff --git a/content/post/2012-11-07-announcing-finatra-1-0-0.markdown b/content/post/2012-11-07-announcing-finatra-1-0-0.markdown
new file mode 100644
index 0000000..5507b1a
--- /dev/null
+++ b/content/post/2012-11-07-announcing-finatra-1-0-0.markdown
@@ -0,0 +1,72 @@
+---
+layout: post
+title: "announcing finatra 1.0.0"
+date: 2012-11-07T21:20:00Z
+comments: true
+categories: ["finatra", "scala"]
+---
+
+After months of work, [Finatra](https://github.com/capotej/finatra#readme) 1.0.0 is finally available! Finatra is a scala web framework inspired by [Sinatra](https://github.com/sinatra/sinatra#readme), built on top of [Finagle](http://twitter.github.com/finagle).
+
+### The API
+
+The API looks like what you'd expect; here's a simple endpoint that uses route parameters:
+
+```scala
+get("/user/:username") { request =>
+  val username = request.routeParams.getOrElse("username", "default_user")
+  render.plain("hello " + username).toFuture
+}
+```
+
+The ```toFuture``` call means that the response is actually a [Future](http://twitter.github.com/scala_school/finagle.html#Future), a powerful concurrency abstraction worth checking out. 
+
+Testing it is just as easy:
+
+```scala
+"GET /user/foo" should "respond with hello foo" in {
+  get("/user/foo")
+  response.body should equal ("hello foo")
+}
+```
+
+### A super quick demo
+
+```sh
+$ git clone https://github.com/capotej/finatra.git
+$ cd finatra
+$ ./finatra new com.example.myapp /tmp
+```
+
+Now you have a ```/tmp/myapp``` you can use:
+
+```sh
+
+$ cd /tmp/myapp
+$ mvn scala:run
+```
+
+A simple app should've started up locally on port 7070; verify with:
+
+```sh
+$ curl http://localhost:7070
+hello world
+```
+
+You can see the rest of the endpoints at ```/tmp/myapp/src/main/scala/com/example/myapp/App.scala```
+
+### Heroku integration
+
+The generated apps work on Heroku out of the box:
+
+```sh
+$ heroku create
+$ git init
+$ git add .
+$ git commit -am 'stuff'
+$ git push heroku master
+```
+
+Make sure to see the full details in the [README](https://github.com/capotej/finatra#readme) and check out the [example app](https://github.com/capotej/finatra-example).
+
+Props to [@twoism](http://twitter.com/twoism) and [@thisisfranklin](http://twitter.com/thisisfranklin) for their code, feedback and moral support.
diff --git a/content/post/2012-11-13-automatic-high-quality-releases.markdown b/content/post/2012-11-13-automatic-high-quality-releases.markdown
new file mode 100644
index 0000000..55500bc
--- /dev/null
+++ b/content/post/2012-11-13-automatic-high-quality-releases.markdown
@@ -0,0 +1,116 @@
+---
+layout: post
+title: "automatic high quality releases"
+date: 2012-11-13T21:31:00Z
+comments: true
+categories: ["shell scripting", "finatra"]
+---
+
+Recently, I invested some time into automating some of the work that goes into a [Finatra](http://github.com/capotej/finatra#readme) release. 
+
+The work consists of updating:
+
+* The version in the XML fragment of the main [README.markdown](https://github.com/capotej/finatra/blob/master/README.markdown)
+
+* The version in the [pom.xml](https://github.com/capotej/finatra-example/blob/master/pom.xml) of the [example app](http://github.com/capotej/finatra-example)
+
+* Any API changes in the [example app](http://github.com/capotej/finatra-example)
+
+* The version in the template pom.xml of the app generator
+
+* The generated unit test of the app generator that demonstrates testing any new API features
+
+* Any API changes inside the app template of the app generator
+
+Using [sub](https://github.com/37signals/sub#readme), I was able to create a `finatra` [command](https://github.com/capotej/finatra/tree/master/script/finatra) that automated all of the above based on a single template, which also happens to be the [main unit test](https://github.com/capotej/finatra/blob/master/src/test/scala/com/twitter/finatra/ExampleSpec.scala). This ensures that the README, the example app, and the app generator never fall out of sync with the framework's API.
+
+
+Last week we released [1.1.0](https://github.com/capotej/finatra/commit/37c81957271dde77d4c3f6361bbae705a5142c89), and the README was [completely generated](https://github.com/capotej/finatra/commit/913d0ed5bfa18c903feb5779d4d8b9d87703b6c5), as was the [example app](https://github.com/capotej/finatra-example/commit/dbc82908360f3cb4cfc4388c28f593f17258fab2). Not to mention, all generated apps would also contain the latest templates and examples!
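The core mechanism is mechanical: carve labeled regions out of a single annotated source file, then splice them into the README, the example app, and the generator templates. Here is a hypothetical, stdlib-only Go sketch of that extraction step (the `extractSection` helper and its marker handling are made up for illustration; the real tooling is a set of shell scripts):

```go
package main

import (
	"fmt"
	"strings"
)

// extractSection returns the text found between a begin and end marker
// in src, plus whether both markers were present. The helper and its
// exact behavior are illustrative, not part of the finatra tooling.
func extractSection(src, begin, end string) (string, bool) {
	start := strings.Index(src, begin)
	if start < 0 {
		return "", false
	}
	start += len(begin)
	length := strings.Index(src[start:], end)
	if length < 0 {
		return "", false
	}
	return strings.TrimSpace(src[start : start+length]), true
}

func main() {
	spec := `
/* ###BEGIN_APP### */
class ExampleApp extends Controller { }
/* ###END_APP### */
`
	app, ok := extractSection(spec, "/* ###BEGIN_APP### */", "/* ###END_APP### */")
	fmt.Println(ok, app) // prints: true class ExampleApp extends Controller { }
}
```

Because the markers are ordinary comments, the annotated file stays a perfectly normal, runnable unit test.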
+ +![](http://i0.kym-cdn.com/photos/images/original/000/021/073/1254172884282.jpg?1254173845) + +Let's dive into how it all works: + +## The source of truth + +I annotated our main unit test with special tokens, like so: + +```scala ExampleAppSpec.scala +class ExampleSpec extends SpecHelper { + + /* ###BEGIN_APP### */ + + class ExampleApp extends Controller { + + /** + * Basic Example + * + * curl http://localhost:7070/hello => "hello world" + */ + get("/") { request => + render.plain("hello world").toFuture + } + + } + + val app = new ExampleApp + + /* ###END_APP### */ + + + /* ###BEGIN_SPEC### */ + + "GET /hello" should "respond with hello world" in { + get("/") + response.body should equal ("hello world") + } + + /* ###END_SPEC### */ +} +``` + +Using the special `/* ### */` comments, the main app and its test can be extracted from the code of our test. + +## The app generator + +Now that we have our "template", we can build our app generator to use it. I customized [base](http://capotej.com/blog/2012/11/01/base-a-scala-project-generator/) and ended up with: [script/finatra/libexec/finatra-new](https://github.com/capotej/finatra/blob/master/script/finatra/libexec/finatra-new) + +You can then run: + +```sh +$ ./finatra new com.example.myapp +``` + +and it will generate ```myapp/``` based on the tested example code from the test suite above. + +## The example app + +The [example app](https://github.com/capotej/finatra-example#readme) is just a generated app using the latest app generator: + +```sh + +#!/bin/bash +# Usage: finatra update-example +# Summary: generates the example app from the template + +set -e + +source $_FINATRA_ROOT/lib/base.sh + +tmpdir=$(mktemp -d /tmp/finatra_example.XXX) + +$_FINATRA_ROOT/bin/finatra new com.twitter.finatra_example $tmpdir + +cp -Rv $tmpdir/finatra_example/ $EXAMPLE_REPO + +rm -rf $tmpdir + +cd $EXAMPLE_REPO && mvn test + +``` + +This also tests the app generator and the generated app! 
+
+## Updating the README
+
+Lastly, there's a [command](https://github.com/capotej/finatra/blob/master/script/finatra/libexec/finatra-update-readme) for updating the README with the new example and version number.
diff --git a/content/post/2013-07-28-playing-with-groupcache.markdown b/content/post/2013-07-28-playing-with-groupcache.markdown
new file mode 100644
index 0000000..b090154
--- /dev/null
+++ b/content/post/2013-07-28-playing-with-groupcache.markdown
@@ -0,0 +1,84 @@
+---
+layout: post
+title: "Playing with groupcache"
+date: 2013-07-28T14:49:00Z
+comments: true
+categories: ["go", "databases", "distributed computing"]
+---
+
+This week, [@bradfitz](http://twitter.com/bradfitz) (of memcached fame) released [groupcache](http://github.com/golang/groupcache) at OSCON 2013. I'm already a big fan of [memcached](http://memcached.org) and [camlistore](http://camlistore.org), so I couldn't wait to download it and kick the tires.
+
+By the way, I **strongly** recommend you go through the [slides](http://talks.golang.org/2013/oscon-dl.slide#1) and [README](http://github.com/golang/groupcache) before going further.
+
+## What groupcache isn't
+After downloading it (without reading the [slides](http://talks.golang.org/2013/oscon-dl.slide#1)), I instinctively searched around for how to actually start the server(s), only to find nothing. Turns out, groupcache is more of a _library_ with a server built in, rather than a traditional standalone server. Another important consideration is that there's **no support for set/update/evict operations**; all you get is GET. Really fast, consistent, distributed GETs.
+
+## What it is
+Once you realize that groupcache is more of a **smart, distributed LRU cache**, rather than an outright memcached replacement, it all makes much more sense. Especially considering what it was built for: caching immutable file blobs for [dl.google.com](http://dl.google.com).
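That GET-only, fill-on-miss model is easy to picture in code. Here is a deliberately tiny, stdlib-only sketch of the pattern (the names `Group`, `GetterFunc`, and `NewGroup` echo groupcache's vocabulary, but this is an illustration of the idea, not its actual API; there is no LRU eviction or peer awareness here):

```go
package main

import (
	"fmt"
	"sync"
)

// GetterFunc is the single hook you hand the cache: given a key,
// produce the bytes for its value from wherever they really live.
type GetterFunc func(key string) ([]byte, error)

// Group is a read-through cache: Get is the only operation.
// There is no Set, Update, or Evict, mirroring groupcache's design.
type Group struct {
	mu     sync.Mutex
	cache  map[string][]byte
	getter GetterFunc
}

func NewGroup(getter GetterFunc) *Group {
	return &Group{cache: make(map[string][]byte), getter: getter}
}

func (g *Group) Get(key string) ([]byte, error) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if v, ok := g.cache[key]; ok {
		return v, nil // hit: served from cache
	}
	v, err := g.getter(key) // miss: the getter is the only write path
	if err != nil {
		return nil, err
	}
	g.cache[key] = v
	return v, nil
}

func main() {
	g := NewGroup(func(key string) ([]byte, error) {
		// stand-in for the slow backing store
		return []byte("value-for-" + key), nil
	})
	v, _ := g.Get("foo")
	fmt.Println(string(v)) // prints "value-for-foo"
}
```

Everything groupcache adds on top (consistent hashing across peers, hot-key replication, request collapsing) hangs off this same one-way data flow.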
+
+## How to use it
+For groupcache to work, you have to give it a closure that, given a ```key```, fills up a ```dest``` buffer with the bytes for that key's value, from wherever you actually store them. This could be hitting a database, a network filesystem, anything. Then you create a groupcache ```group``` object, which knows the addresses of all the other groupcache instances. This is pluggable, so you can imagine rigging that up to zookeeper or the like for automatic node discovery. Finally, you start groupcache up by using go's built-in ```net/http``` and a ```ServeHTTP``` provided by the previously constructed ```group``` object.
+
+## Running the demo
+In order to really try out groupcache, I realized I needed to create a mini test infrastructure consisting of a slow database, frontends, and a client. Visit the [GitHub repo](http://github.com/capotej/groupcache-db-experiment) for more details. This is what the topology looks like:
+![groupcache topology](https://raw.github.com/capotej/groupcache-db-experiment/master/topology.png)
+
+#### Setup
+1. ```git clone git@github.com:capotej/groupcache-db-experiment.git```
+2. ```cd groupcache-db-experiment```
+3. ```sh build.sh```
+
+#### Start database server
+1. ```cd dbserver && ./dbserver```
+
+#### Start Multiple Frontends
+1. ```cd frontend```
+2. ```./frontend -port 8001```
+3. ```./frontend -port 8002```
+4. ```./frontend -port 8003```
+
+#### Use the CLI to play around
+
+Let's set a value into the database:
+
+    ./cli -set -key foo -value bar
+
+Now get it out again to make sure it's there:
+
+    ./cli -get -key foo
+
+You should see ```bar``` as the response, after a noticeable ~300ms lag.
+
+Let's ask for the same value, via the cache this time:
+
+    ./cli -cget -key foo
+
+You should see, in one of the frontends' output, that the key ```foo``` was requested, and in turn requested from the database. 
Let's get it again:
+
+    ./cli -cget -key foo
+
+You should have gotten this value instantly, as it was served from groupcache.
+
+Here's where things get interesting: request that same key from a different frontend:
+
+    ./cli -port 9002 -cget -key foo
+
+You should still see ```bar``` come back instantly, even though this particular groupcache node did not have this value. This is because groupcache knew that 9001 had this key, went to that node to fetch it, then cached it itself. **This is groupcache's killer feature**, as it avoids the thundering-herd problem commonly associated with losing cache nodes.
+
+#### Node failure
+Let's simulate single-node failure: find the "owner" of key ```foo``` (this is going to be the frontend that said "asking for foo from dbserver"), and kill it with Ctrl+C. Request the value again:
+
+    ./cli -cget -key foo
+
+It'll most likely hit the dbserver again (unless that particular frontend happens to have it), and cache the result on one of the other remaining frontends. As more clients ask for this value, it'll spread through the caches organically. When that server comes back up, it'll start receiving other keys to share, and so on. The fan-out is explained in more detail on this [slide](http://talks.golang.org/2013/oscon-dl.slide#47).
+
+## Conclusion / Use cases
+Since there is no support (by design) for eviction or updates, groupcache is a really good fit for read-heavy, immutable content. Some use cases:
+
+ * Someone like [GitHub](http://github.com) using it to cache blobrefs from their file servers
+ * Large websites using it as a CDN (provided their asset names are unique, e.g. ```logo-0492830483.png```)
+ * Backend for [Content-addressable storage](http://en.wikipedia.org/wiki/Content-addressable_storage)
+
+Definitely a clever tool to have in the distributed systems toolbox. 
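A footnote on the killer feature above: it boils down to collapsing a herd of concurrent requests for one key into a single backend fetch. Here is a stripped-down, stdlib-only sketch of that idea (groupcache's real implementation lives in its singleflight package and layers peer routing on top; the `Flight` type here is an illustration, not its API):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// call represents one in-flight fetch that late arrivals can wait on.
type call struct {
	wg  sync.WaitGroup
	val string
}

// Flight deduplicates concurrent fetches of the same key: the first
// caller does the work, everyone else blocks and shares the result.
type Flight struct {
	mu    sync.Mutex
	calls map[string]*call
}

func (f *Flight) Do(key string, fetch func() string) string {
	f.mu.Lock()
	if f.calls == nil {
		f.calls = make(map[string]*call)
	}
	if c, ok := f.calls[key]; ok {
		// a fetch for this key is already running: wait for it
		f.mu.Unlock()
		c.wg.Wait()
		return c.val
	}
	c := new(call)
	c.wg.Add(1)
	f.calls[key] = c
	f.mu.Unlock()

	c.val = fetch() // only this caller hits the backend
	c.wg.Done()

	f.mu.Lock()
	delete(f.calls, key)
	f.mu.Unlock()
	return c.val
}

func main() {
	var f Flight
	var fetches int32
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v := f.Do("foo", func() string {
				atomic.AddInt32(&fetches, 1)
				time.Sleep(50 * time.Millisecond) // simulate the slow dbserver
				return "bar"
			})
			if v != "bar" {
				panic("unexpected value")
			}
		}()
	}
	wg.Wait()
	// with overlapping requests this typically reports a single fetch
	fmt.Printf("10 concurrent gets, %d backend fetch(es)\n", fetches)
}
```

Combine that with "every key has exactly one owning peer" and a dying cache node degrades into extra fetches, not a stampede.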
+
+_Shout out to professors [@jmhodges](http://twitter.com/jmhodges) and [@mrb_bk](http://twitter.com/mrb_bk) for proofreading this project and post_
diff --git a/content/post/2013-10-07-golang-http-handlers-as-middleware.markdown b/content/post/2013-10-07-golang-http-handlers-as-middleware.markdown
new file mode 100644
index 0000000..c1249e9
--- /dev/null
+++ b/content/post/2013-10-07-golang-http-handlers-as-middleware.markdown
@@ -0,0 +1,56 @@
+---
+layout: post
+title: "Golang http handlers as middleware"
+date: 2013-10-07T08:52:00Z
+comments: true
+categories: ["go", "http"]
+---
+
+Most modern web stacks allow the "filtering" of requests via stackable/composable middleware, allowing you to cleanly separate cross-cutting concerns from your web application. This weekend I needed to hook into go's ```http.FileServer``` and was pleasantly surprised by how easy it was to do.
+
+Let's start with a basic file server for ```/tmp```:
+
+```go main.go
+func main() {
+  http.ListenAndServe(":8080", http.FileServer(http.Dir("/tmp")))
+}
+```
+
+This starts up a local file server at :8080. How can we hook into this so we can run some code before file requests are served? Let's look at the signature of ```http.ListenAndServe```:
+
+```go
+func ListenAndServe(addr string, handler Handler) error
+```
+
+So it looks like ```http.FileServer``` returns a ```Handler``` that knows how to serve files given a root directory. Now let's look at the ```Handler``` interface:
+
+```go
+type Handler interface {
+        ServeHTTP(ResponseWriter, *Request)
+}
+```
+
+Because of go's granular interfaces, any object can be a ```Handler``` so long as it implements ```ServeHTTP```. It seems all we need to do is construct our own ```Handler``` that wraps ```http.FileServer```'s handler. 
There's a built-in helper, ```http.HandlerFunc```, for turning ordinary functions into handlers:
+
+```go
+type HandlerFunc func(ResponseWriter, *Request)
+```
+
+Then we just wrap ```http.FileServer``` like so:
+
+```go main.go
+func OurLoggingHandler(h http.Handler) http.Handler {
+  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+    fmt.Println(*r.URL)
+    h.ServeHTTP(w, r)
+  })
+}
+
+func main() {
+  fileHandler := http.FileServer(http.Dir("/tmp"))
+  wrappedHandler := OurLoggingHandler(fileHandler)
+  http.ListenAndServe(":8080", wrappedHandler)
+}
+```
+
+Go has a bunch of other built-in [handlers](http://golang.org/pkg/net/http/#Handler) like [TimeoutHandler](http://golang.org/pkg/net/http/#TimeoutHandler) and [RedirectHandler](http://golang.org/pkg/net/http/#RedirectHandler) that can be mixed and matched the same way.
diff --git a/content/post/2014-02-24-tmux-session-coloring.markdown b/content/post/2014-02-24-tmux-session-coloring.markdown
index 425648a..0a8a851 100644
--- a/content/post/2014-02-24-tmux-session-coloring.markdown
+++ b/content/post/2014-02-24-tmux-session-coloring.markdown
@@ -1,7 +1,6 @@
 ---
 layout: post
 title: "Tmux session coloring"
-date: 2014-02-24 08:30
 date: 2014-02-24T00:00:00Z
 comments: true
 tags: ["shell scripting", "tmux"]