
3/29/2012

UNDOING IN GIT - RESET, CHECKOUT AND REVERT

If you make a commit that you later wish you hadn't, there are two fundamentally different ways to fix the problem:
1. You can create a new commit that undoes whatever was done by the old commit. This is the correct thing if your mistake has already been made public.
2. You can go back and modify the old commit. You should never do this if you have already made the history public; git does not normally expect the "history" of a project to change, and cannot correctly perform repeated merges from a branch that has had its history changed.

Fixing a mistake with a new commit
Creating a new commit that reverts an earlier change is very easy; just pass the git revert command a reference to the bad commit; for example, to revert the most recent commit:
$ git revert HEAD
This will create a new commit which undoes the change in HEAD. You will be given a chance to edit the commit message for the new commit. You can also revert an earlier change, for example, the next-to-last:
$ git revert HEAD^
In this case git will attempt to undo the old change while leaving intact any changes made since then. If more recent changes overlap with the changes to be reverted, then you will be asked to fix conflicts manually, just as in the case of resolving a merge.
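
If the revert does hit conflicts, the flow looks roughly like this (the file path is made up for illustration):

$ git revert HEAD^
# git stops and reports conflicts, e.g. in lib/foo.rb (hypothetical path)
# edit the file to resolve them, then:
$ git add lib/foo.rb
$ git revert --continue      # on older git versions, finish with a plain `git commit` instead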


Fixing a mistake by modifying a commit
If you have just committed something but realize you need to fix up that commit, recent versions of git commit support an --amend flag which instructs git to replace the HEAD commit with a new one, based on the current contents of the index.
$ git commit --amend
This gives you an opportunity to add files that you forgot to add, or to correct typos in the commit message, before pushing the change out for the world to see. If you find a mistake in an older commit that you have not yet published to the world, use git rebase in interactive mode ("git rebase -i"), marking the commit that needs correction with "edit". This lets you amend that commit during the rebasing process.
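
A rough sketch of that flow, assuming the bad commit is among the last three (adjust the range to taste):

$ git rebase -i HEAD~3       # change "pick" to "edit" on the line of the bad commit
# ...git replays commits and stops at the one marked "edit"...
$ git commit --amend         # fix the files and/or the message here
$ git rebase --continue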

3/20/2012

arRsync – an Rsync GUI for Mac OS X


HOWTO Backup Your Mac With rsync

git code swarm

1. Generate the git log


[~/dev/aurora-prototype] $ git log --name-status --pretty=format:'%n------------------------------------------------------------------------%nr%h | %ae | %ai (%aD) | x lines%nChanged paths: %N' > activity.log


2. Convert the log
python convert_logs.py -g activity.log -o activity.xml

The -g flag tells convert_logs.py that the input is in git log format.



# This is a sample configuration file for code_swarm

# Frame width
Width=800

# Frame height
Height=600

# Input file
InputFile=data/aurora.xml

# Particle sprite file
ParticleSpriteFile=src/particle.png

#Font Settings
Font=SansSerif
FontSize=20
BoldFontSize=22

# Project time per frame
#MillisecondsPerFrame=21600000

# Maximum number of Background processes
MaxThreads=4

# Optional Method instead of MillisecondsPerFrame
FramesPerDay=18

# Background in R,G,B
Background=0,0,0

# Color assignment rules
# Keep in order, do not skip numbers. Numbers start
# at 1.
#
# Pattern: "Label", "regex", R,G,B, R,G,B
# Label is optional. If it is omitted, the regex
# will be used.
#
ColorAssign1="javascripts","(.*js.*)|(.*coffee.*)", 149,204,79, 149,204,79
ColorAssign2="styles","(.less)|(.css)|(.scss)", 255,255,0, 255,255,0
ColorAssign3="templates","(.htm)|(.html)|(.hbs)|(.mustache)", 255,0,0, 255,0,0
ColorAssign4="house keeping","(.sh)|(.rb)|(.log)|(.lock)|(Gemfile)|(.gitkeep)", 238,102,68, 238,102,68

# Save each frame to an image?
TakeSnapshots=true

# Where to save each frame
SnapshotLocation=frames/code_swarm-#####.png

# Draw names (combinatory) :
# Draw sharp names?
DrawNamesSharp=true
# And draw a glow around names? (Runs slower)
DrawNamesHalos=false

# Draw files (combinatory) :
# Draw sharp files
DrawFilesSharp=true
# Draw fuzzy files
DrawFilesFuzzy=false
# Draw jelly files
DrawFilesJelly=true

# Show the Legend at start
ShowLegend=true

# Show the History at start
ShowHistory=true

# Show the Date at start
ShowDate=true

# Show edges between authors and files, mostly for debug purpose
ShowEdges=true

# Turn on Debug counts.
ShowDebug=false

# Natural distance of files to people
EdgeLength=50

# Amount of life to decrement
EdgeDecrement=-2
FileDecrement=-2
PersonDecrement=-1

#Speeds.
#Optional: NodeSpeed=7.0, If used, FileSpeed and PersonSpeed need not be set.
#
FileSpeed=7.0
PersonSpeed=1.5

#Masses
FileMass=1.0
PersonMass=20.0

# Life of an Edge
EdgeLife=250

# Life of a File
FileLife=200

# Life of a Person
PersonLife=255

# Highlight percent.
# This is the amount of time that the person or
# file will be highlighted.
HighlightPct=10

## Physics engine selection and configuration
# Directory physics engine config files reside in.
PhysicsEngineConfigDir=physics_engine
# Force calculation algorithms ("PhysicsEngineLegacy", "PhysicsEngineSimple"...) :
PhysicsEngineSelection=PhysicsEngineLegacy

# OpenGL is experimental. Use at your own risk.
UseOpenGL=false
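
3. Run code_swarm against the converted log and a config like the one above. On a typical source checkout this is done through the bundled run.sh wrapper; the paths below are assumptions, so adjust them to wherever code_swarm and the config actually live:

$ cd ~/dev/code_swarm                 # hypothetical checkout location
$ ./run.sh path/to/aurora.config      # the config shown above

If TakeSnapshots is enabled, the rendered frames end up under the SnapshotLocation path (frames/ here) and can later be stitched into a video with a tool such as ffmpeg.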

3/19/2012

http://www.youtube.com/watch?feature=player_embedded&v=CTSO0SDtPaw

TEDTalks » Dan Pink on the surprising science of motivation (with Chinese and English subtitles)

As long as the task involved only mechanical skill, bonuses worked as expected: the higher the pay, the better the performance.

But once the task called for "even rudimentary cognitive skill", a larger reward "led to poorer performance."


Built around three things:
. AUTONOMY 自主性 the urge to direct our own lives
. MASTERY 掌握度 the desire to get better and better at something that matters
. PURPOSE 使命感 the yearning to do what we do in the service of something larger than ourselves

Pay people adequately and fairly, absolutely; then give people lots of autonomy.

The secret of high performance isn't rewards and punishments, but the unseen intrinsic drive: the drive to do things for their own sake, the drive to do things because they matter.

How Basecamp Next got to be so damn fast without using much client-side UI - (37signals)


#1: Stacker – an advanced pushState-based engine for sheets
The Stacker engine reduces HTTP requests on a per-page basis to a minimum by keeping the layout the same between requests. This is the same approach used by pjax and powered by the same HTML5 pushState.

This means that only the very first request spends time downloading CSS, JavaScript, and image sprites. Every subsequent request will only trigger a single HTTP request to get the HTML that changed and whatever additional images are needed. You not only save the network traffic doing it like this, you also save the JavaScript compilation step.

It’s a similar idea to JavaScript-based one-page apps, but instead of sending JSON across the wire and implementing the whole UI in client-side MVC JavaScript, we just send regular HTML. From a programmer’s perspective, it’s just like a regular Rails app, except Stacker requests do not require rendering the layout.

So you get all the advantages of speed and snappiness without the degraded development experience of doing everything on the client. Which is made double nice by the fact that you get to write more Ruby and less JavaScript (although CoffeeScript does make that less of an issue).

Our Stacker engine even temporarily caches each page you’ve visited and simply asks for a new version in the background when you go back to it. This makes navigation back and forth even faster.

Now Stacker is purposely built for the sheet-based UI that we have. It knows about sheet nesting, how to break out of a sheet chain, and more. We therefore have no plans of open sourcing it. But you can get (almost) all the speed benefits of this approach simply by adopting pjax, which is actually where we started for Basecamp Next until we went fancy with Stacker.

#2: Caching TO THE MAX
Stacker can only make things appear so fast. If actions still take 500ms to render, it’s not going to have that ultra snappy feel that Basecamp Next does. To get that sensation, your requests need to take less than 100ms. Once our caches are warm, many of our requests take less than 50ms and some even less than 20ms.
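
A quick way to sanity-check those numbers from a shell is curl's built-in timing output; the URL here is just a placeholder:

$ curl -o /dev/null -s -w 'total: %{time_total}s\n' https://example.com/projects/1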

The only way we can get complex pages to take less than 50ms is to make liberal use of caching. We went about forty miles north of liberal and ended up with THE MAX. Every stand-alone piece of content is cached in Basecamp Next. The todo item, the todo lists, the block of todo lists, and the project page that includes all of it.

This Russian doll approach to caching means that even when content changes, you’re not going to throw out the entire cache. Only the bits you need to and then you reuse the rest of the caches that are still good.



This is illustrated in the picture above. If I change todo #45, I’ll have to bust the cache for the todo, the cache for the list, the cache for all todolists, and the cache for the page itself. That sounds terrible on the surface until you realize that everything else is cached as well and can be reused.

So yes, the todolist cache that contains todo #45 is busted, but it can be regenerated cheaply because all the other items on that list are still cached and those caches are still good. So to regenerate the todolist cache, we only pay the price of regenerating todo #45 plus the cost of reading the 7 other caches — which is of course very cheap.

The same plays out for the entire todolist section. We just pay to regenerate todolist #67 and then we read the existing caches of all the other todolist caches that are still good. And again, the same with the project page cache. It’ll just read the caches of discussions etc and not pay to regenerate those.

The entire scheme works under the key-based cache expiration model. There’s nothing to manually expire. When a todo is updated, the updated_at timestamp is touched, which triggers a chain of updates to touch the todolist and then the project. The old caches that will no longer be read are simply left to be automatically garbage collected by memcached when it’s running low on space (which will take a while).

Thou shall share a cache between pages
To improve the likelihood that you’re always going to hit a warm cache, we’re reusing the cached pieces all over the place. There’s one canonical template for each piece of data and we reuse that template in every spot that piece of data could appear. That’s general good practice when it comes to Rails partials, but it becomes paramount when your caching system is bound to it.

Now this is often quite easy. A todo looks the same regardless of where it appears. Here’s the same todo appearing in three different pages all being pulled from the same cache:



The presentation in the first two pages is identical and in the last we’ve just used CSS to bump up the size a bit. But still the same cache.

Now sometimes this is not as easy. We have audit trails on everything in Basecamp Next, and these event lines need to appear in different contexts and with slight variations in how they’re presented. Here are a few examples:



To allow for these three different representations of the cached HTML, we wrap all the little pieces in different containers that can be turned on/off and styled through CSS:



Thou shall share a cache between people
While sharing caches between pages is reasonably simple, it gets a tad more complicated when you want to share them between users. When you move to a cache system like we have, you can’t do things like if @current_user.admin? or if @todo.created_at.today?. Your caches have to be the same for everyone and not be bound by any conditionals that might change before the cache key does.

This is where a sprinkle of JavaScript comes in handy. Instead of embedding the logic in the generation of the template, you decorate it after the fact with JavaScript. The block below shows how that happens.



It’s a cached list of possible recipients of a new message on a given project, but my name is not in it, even though it’s in the cache. That’s because each checkbox is decorated with a data-subscriber-id HTML attribute that corresponds to their user id. The JavaScript reads a cookie that contains the current user’s id, finds the element with a matching data-subscriber-id, and removes it from the DOM. Now all users can share the same user list for notification without seeing their own name on the list.

Combining it all and sprinkling HTTP caching and infinite pages on top
None of these techniques in isolation are enough to produce the super snappy page loads we’ve achieved with Basecamp Next, but in combination they get there. For good measure we’re also diligent about using etags and last-modified headers to further cut down on network traffic. We also use infinite scrolling pages to send smaller chunks.
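
The etag/last-modified part boils down to conditional GETs. This is roughly what that exchange looks like with curl; the URL and the etag value are made up:

$ curl -sI https://example.com/projects/1 | grep -i etag
ETag: "abc123"
$ curl -s -o /dev/null -w '%{http_code}\n' -H 'If-None-Match: "abc123"' https://example.com/projects/1
304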

Getting as far as we’ve gotten with this system would have been close to impossible if we had tried to evolve our way there from Basecamp Classic. This kind of rearchitecture was so fundamental and cuts so deep that we often used it in feature arguments: That’s hard to cache, is there another way to do it?

We’ve made speed the centerpiece of Basecamp Next. We’re all-in on having one of the fastest web applications out there without killing our development joy by moving everything client-side. We hope you enjoy it!

tl;dr: We made Basecamp Next go woop-woop fast by using a fancy HTML5 feature and some serious elbow grease on them caching wheels

[Show Desktop] A "Show Desktop" icon for the Mac: one click minimizes every window!


Windows users are used to clicking the "Show Desktop" button on the taskbar to minimize all windows, which makes it easy to quickly reach files and icons on the desktop. Mac OS X has no equivalent built in; you can press Command+F3 to push all windows aside, but if you would rather minimize everything with a single mouse click, try the small free utility below.

The "Show Desktop" utility introduced here does exactly one thing: click it and every window is minimized. Simple, but very practical. Besides sitting in the Dock, it can also live in the menu bar at the top of the screen, so you can place the "Show Desktop" icon wherever it best fits your habits.

http://www.everydaysoftware.net/showdesktop/

Creating an ISO image on the Mac

The following steps create an .iso file from a disc in the optical drive (a command-line sketch follows the steps):
1) Applications -> Utilities -> Disk Utility
2) Select the optical drive in the sidebar on the left (e.g. MATSHITA DVD-R UJ-898), then choose File -> New -> Disk Image from "OS_4645" in the menu bar
3) Choose DVD/CD master as the image format
4) Save it as a .cdr file; once it is done, simply rename the file to .iso
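
The same thing can also be scripted from Terminal with hdiutil; this is only a sketch, and the volume name matches the example disc above, so substitute whatever your disc actually mounts as:

# build an ISO straight from the mounted disc
$ hdiutil makehybrid -iso -joliet -o OS_4645.iso /Volumes/OS_4645
# or, after the Disk Utility route, step 4's rename is simply:
$ mv OS_4645.cdr OS_4645.iso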

ref:
Create an ISO image on Mac OS X using built-in Disk Utility App
Create an ISO using the Mac's built-in tools (利用mac自带工作制作iso)

3/17/2012

exFAT, FAT64

exFAT is supported by Mac OS X (since 10.6.5), Windows XP (with the update below), and Windows 7.
[Original post] exFAT vs FAT32 vs NTFS @ A-Data 16GB SDHC Class 6 x ASUS Eee PC 901 quick test report (Da Da 寫意空間)

http://www.microsoft.com/downloads/zh-tw/details.aspx?familyid=1cbe3906-ddd1-4ca2-b727-c2dff5e30f61
KB955704: Windows XP update
exFAT driver

3/10/2012

pbcopy / pbpaste in Ubuntu (command line clipboard)


In Ubuntu (or any Linux distro with X Windows), a similar tool is xclip. I like to make these aliases:

alias pbcopy='xclip -selection clipboard'
alias pbpaste='xclip -selection clipboard -o'
or the following also works if you would rather use xsel:

alias pbcopy='xsel --clipboard --input'
alias pbpaste='xsel --clipboard --output'
Now you can pipe any text to pbcopy:

$ cat ~/.ssh/id_dsa.pub | pbcopy
Your public ssh key is transferred to your clipboard and is ready to be pasted (perhaps with pbpaste).
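
Going the other way, pbpaste writes the clipboard to stdout, so it composes with pipes just as well; the file name below is only an example:

$ pbpaste >> ~/notes.txt     # append the clipboard contents to a file
$ pbpaste | wc -l            # or feed them to any other command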