Maybe that app could help people use pull-requests during editorial reviews, much like how tests help developers with code reviews.
It led to a rather simple question:
What do I have to do to get the following to work?
OK, so that’s what I want to do. I researched and built the pieces, but when it came time to actually build the entire Node app, I stopped short.
That makes this a half-baked idea: I’ve verified the individual pieces of tech work, but am stopping because I don’t want to actually build an entire product.
What tech would I need to build in order to do this? Here are the broad strokes:
There’s a rather clever npm module write-good which you can run as a CLI. Example from their docs:
I figure we can use this to test a repo’s README.
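A sketch of that check, per write-good’s README (the README.md path is just an example):

```shell
# Install the write-good CLI globally.
npm install -g write-good

# Point it at a repo's README; it flags passive voice, weasel words, etc.
write-good README.md
```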
We’ll need to get an AccessToken since each of our GitHub API calls will require it. We can either:
For personal access tokens, you can follow instructions here.
For OAuth sign-in, you can duck-duck-go for “Sign in With GitHub”. In Rails, omniauth is the dominant way forward. In Node/Express land, I found the options less hospitable, but the npm package passport-github2 is very well done.
You’ll want your app to get notified when a pull-request is created or updated so you can run your “test suite”.
I imagine that after your user signs in, you’ll present them with a list of their repos. How do you get their repository list? I used the GraphQL endpoint.
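The stripped-out listing was presumably the repositories query. Here’s a hedged Ruby sketch that builds the request; the exact field selection is illustrative:

```ruby
require 'json'
require 'net/http'
require 'uri'

# The GraphQL query: the signed-in user's repositories.
QUERY = <<~GRAPHQL
  query {
    viewer {
      repositories(first: 50, orderBy: {field: NAME, direction: ASC}) {
        nodes { name nameWithOwner }
      }
    }
  }
GRAPHQL

# Build the POST to GitHub's GraphQL endpoint; `token` is the access
# token from the sign-in step.
def build_repo_request(token)
  uri = URI('https://api.github.com/graphql')
  req = Net::HTTP::Post.new(uri)
  req['Authorization'] = "bearer #{token}"
  req['Content-Type'] = 'application/json'
  req.body = JSON.generate(query: QUERY)
  req
end

# To actually send it (network call, so not run here):
# Net::HTTP.start('api.github.com', 443, use_ssl: true) do |http|
#   http.request(build_repo_request(ENV['GITHUB_TOKEN']))
# end
```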
They’d switch a repo on, at which point you would post to GitHub’s WebHook API and register a webhook.
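A sketch of that registration call against the REST v3 create-hook endpoint; the webhook URL and owner/repo values are placeholders:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build the POST /repos/:owner/:repo/hooks request that subscribes us
# to pull_request events. `webhook_url` is our app's receiving endpoint.
def build_hook_request(owner:, repo:, token:, webhook_url:)
  uri = URI("https://api.github.com/repos/#{owner}/#{repo}/hooks")
  req = Net::HTTP::Post.new(uri)
  req['Authorization'] = "token #{token}"
  req['Content-Type'] = 'application/json'
  req.body = JSON.generate(
    name: 'web',              # "web" is the generic webhook type
    active: true,
    events: ['pull_request'], # fire when a PR is opened or updated
    config: { url: webhook_url, content_type: 'json' }
  )
  req
end
```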
A fun hack here is to actually post to viasocket.com, which stores webhooks and lets you inspect, change, and replay them. When you’re trying to diagnose how to handle a webhook, there’s not much more frustrating than having to recreate an entire scenario to test the typo you just fixed.
Second fun hack is to use https://webhookrelay.com/ or https://ngrok.com to get webhooks to hit your local development system. You’d definitely want to do that here.
OK, so there’s a new pull-request on a repo which your app is watching, and GitHub has told you about it.
We’ll want to notify GitHub that you’re starting a build using the GitHub API (REST v3). That’s what adds our app to the list of builds. GitHub isn’t waiting for us; we tell it when we start a build – this changes the status to “pending”.
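A sketch of that status call (the Statuses API accepts a state of pending, success, error, or failure; the `context` name is made up). Call it with `pending` when the build starts, and the same call with `success` or `failure` closes the build out later:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build POST /repos/:owner/:repo/statuses/:sha.
def build_status_request(owner:, repo:, sha:, token:, state:, description:)
  uri = URI("https://api.github.com/repos/#{owner}/#{repo}/statuses/#{sha}")
  req = Net::HTTP::Post.new(uri)
  req['Authorization'] = "token #{token}"
  req['Content-Type'] = 'application/json'
  req.body = JSON.generate(
    state: state,             # pending | success | error | failure
    description: description,
    context: 'write-good'     # the name shown in the PR's checks list
  )
  req
end
```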
We’re going to need to clone the repository at the pull-request level to run tests. Which SHA should we use after we clone? We’ll ask GraphQL for what we need.
Note: we’ll know from the webhook which pull request number is in question.
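A sketch of the GraphQL query for that SHA, via the pull request’s headRefOid field; owner, name, and number would all come from the webhook payload:

```ruby
# Build the GraphQL query that asks for a pull request's head commit SHA.
def head_sha_query(owner:, name:, number:)
  <<~GRAPHQL
    query {
      repository(owner: "#{owner}", name: "#{name}") {
        pullRequest(number: #{number}) {
          headRefOid
        }
      }
    }
  GRAPHQL
end

# POST this (as {"query": ...} JSON) to https://api.github.com/graphql,
# then `git checkout` the returned SHA in the clone.
```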
This is a little hand-wavey… But imagine we have a server with docker installed. We can build an image that has our write-good npm package installed. We can then run a docker container and:
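A sketch of those steps; the image name, directory, and HEAD_SHA variable are all made up:

```shell
# Build an image with write-good baked in
# (Dockerfile: FROM node, then RUN npm install -g write-good).
docker build -t writegood-runner .

# Clone the PR's repo and check out the head SHA we got from GraphQL.
git clone https://github.com/OWNER/REPO.git build-123
git -C build-123 checkout "$HEAD_SHA"

# Run the "test suite" inside a throwaway container.
docker run --rm -v "$PWD/build-123:/repo" writegood-runner write-good /repo/README.md
```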
We then capture our build status and store it against our BuildID.
Notify GitHub that the build is over and either passed or failed.
I’m sure a ton. After all, I just built the pieces of tech, and didn’t tie them together at all. But it sure was fun!
All of the code I used ended up being stored in Gists, and generally as a Git Repo.
The GitHub API Endpoints we end up using:
PS: you should absolutely use the GitHub GraphQL API Explorer to explore the GraphQL. It’s awesome once you get one specific thing to work and then you can edit and try other things.
]]>There is too much to learn and far too few years to learn it. The only way I’ve been able to be OK with this fact is to schedule my learning, learn by doing, and then read the books on the subject. Note: “read the books, not the blogs” is a technique from Chris Oakman, and it’s helped me dive into the theory behind the tech. The tech changes, but the theory stays the same.
That said, this is probably a very hip-javascript heavy post. If you’re suffering from JavaScript fatigue, this may or may not help.
With learning year 2015-2016, I’ve devoted most of my learning to two separate sets of technology:
Upgrading my JavaScript to 2016 era JavaScript Front-End
Exploring Elixir/Phoenix and other functional languages
Front-end is fun, sticking to 2010 era jQuery is a mess, and functional components, surrounded by imperative containers, are the past and the future.
I started with learning React so that I could explore React-Native. This was a bit like learning how to cook by starting with a Cajun Roux (and filling the house with smoke) when you’re 15. Which: yeah, I totally did. Small steps, I’m not a fan of.
I had been pretty heavy with Ember, running Houston-Ember, and building apps with it. After the ember team declared they would focus on the web-experience (instead of mobile), I started looking for something to help solve the cross-platform development experience. React-Native appeared to do so, and I built a couple of apps with it, not really knowing what I was doing.
You know: change something and see what happens development.
I saw react-rails, and began to see the potential for a mixture of Rails as the API and React components managing their view and their state at the same time. Presentation: Given. Apps: built.
But I missed the ease with which you could import modules and dependencies in front-end apps, versus the flips you had to do to get Sprockets to work in Rails.
Recently, I explored getting webpack or browserify working in a Rails project. The progression through which I tried to get Rails to behave with the node ecosystem:
shakacode’s react_on_rails: complex and thorough, but it was difficult for me to understand what was going on.
Browserify with Rails: excellent tutorial by Reax. It was good, but I wanted to learn webpack.
Jay Morlan: simple webpack integration with Rails.
I haven’t replaced the Asset Pipeline; I’ve enhanced it with a webpack process that builds app/frontend into a bundle, which Rails slurps up and uses with jquery-ujs, action-cable, and other Railsy front-end things. Jay is going to give a talk at Houston-Ruby about this soon, which I’m excited about.
Both elm and elixir are functional languages, and I had a blast writing with both of them. I have friends who have jumped all in on the elixir train, with its millisecond response times, coherent coding style, and Rails like Phoenix framework.
I built out an API for a React-Native mobile app and thoroughly enjoyed the experience. It was a hop-skip-and-a-jump to learn Elm, a front-end functional language that compiles down to JavaScript.
I wrote Hipstack.me in elm; a fun project that allows me to rewrite the entire application every six months in whatever language is hippest at the time. Sort of a TodoMVC, but TodoMVC is a little too popular these days.
Create-React-App: your starting point when creating a React app.
Redux: This full-stack redux tutorial, which should have been a book, and I should have paid for it, finally got redux to stick. (Note: you don’t need redux/react-router to learn react, but it’s nice when you get 3+ components that aren’t parent child).
elm: After hearing that Redux was inspired by elm, I dove in with the (shockingly free, excellent) elm for beginners course. Highly recommended.
Elixir: Essential Elixir
Phoenix: Build an app with Phoenix and Ecto for Beginners.
I’m exploring data-science, both the correlation/statistics/machine-learning aspect as well as the big-data side of the house. I’m not sure I believe in “AI” as traditionally taught, but I absolutely believe in data and our ability to make guesses based upon said data.
PSA to my fellow devs: AI skills are bleeding edge right now, but they’ll be *table stakes* in a few short years. Invest in yourself now.
— Jerod Santo (@jerodsanto) June 27, 2016
I’m also going to give Go another chance. Working through exercism’s examples in Go is super cool. The static code analysis helps you learn the “go” way to do things.
]]>Five cohorts followed; today my seventh, and final, immersive cohort will present their final projects and begin the next phase of their lives. I look back and remember:
I want to thank you all; I was a small part of your journey, and you were a big part of mine.
Overall, I’ve graduated nearly 100 students over 7 cohorts. I started at the Iron Yard in 2014 with a healthy skepticism of code-schools – but I thought, ‘hey JWo, you’ll be able to help with things you care deeply about.’ Namely:
Being a part of something that actually matters to you is an experience I highly recommend. I spent the first 18 years of my career trading my time and ability to tell computers what to do in exchange for money. You know, as you do.
But being able to do so AND help people AND align with your core desires – it’s an emotional experience I cannot fully describe.
I’m proud of what I’ve done at The Iron Yard in Houston, believe wholeheartedly in TIY’s ability to effect real and lasting change for people, and want to expand the scope of the people I can help.
On Monday, I’ll start a new position at The Iron Yard as the Director of Back-End Engineering. I’ll lead the effort to wrangle the curriculum for the back-end stacks we teach at TIY nationally: Rails, .NET, Java, and Python, as well as help develop curriculum for non-immersive courses.
I hope to be able to expand the people we can help, including those who cannot take a 12 week immersive code school. Additionally, I’m excited to experiment with other formats and show off the technology TIY’s internal tech team has built. So Excite. Such potential. Very future.
My goal: as bittersweet as it is to finish leading 15 students on a twelve week journey, I hope to add some zeroes to that number. At 20 campuses (currently), The Iron Yard is the largest codeschool in the country; helping instructors help students has the potential to change even more lives.
I’m like this:
No really, see:
]]>Elixir has an operator called |>. It certainly makes code easier to read, but it’s a bit difficult to grep for the first time.
Let’s use the Turducken to explain it. For the uninitiated, a Turducken is a turkey stuffed with a duck, which is itself stuffed with a chicken.
Code written sans pipe might look like:
You have to read this from the inside out. You:

1. Turducken.stuff("chicken")
2. Turducken.stuff(_, "duck")
3. Turducken.stuff(_, "turkey")
Better code might be to follow the transformation from inside to out (left to right):
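The stripped-out listings can be sketched in Ruby for comparison; Turducken here is a made-up module, and Ruby’s `then` (2.6+) reads left to right much like Elixir’s |>:

```ruby
# A toy Turducken: one argument starts with the innermost bird,
# two arguments wrap the bird so far in the next one.
module Turducken
  def self.stuff(inner, outer = nil)
    outer ? "#{outer}(#{inner})" : inner
  end
end

# Sans pipe: read it inside out.
nested = Turducken.stuff(Turducken.stuff(Turducken.stuff("chicken"), "duck"), "turkey")

# With `then`, the transformation reads left to right, like |>:
piped = Turducken.stuff("chicken")
  .then { |bird| Turducken.stuff(bird, "duck") }
  .then { |bird| Turducken.stuff(bird, "turkey") }

# Both produce "turkey(duck(chicken))".
```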
Full code*
You can run this on the elixir playground
In UNIX, we use a pipe to take output from one operation into the other. Example: cat yolo | pbcopy copies the contents of a file to the Mac clipboard. The output of cat yolo goes into pbcopy. If this were a program (Ruby), it could be:
In Elixir, you could write it as:
Kickstarter open sourced rack-attack, their rack middleware designed to protect your web app from bad clients.
It works by cutting off requests early in the process, a couple of milliseconds in, and returning a 429 Too Many Requests status.
In a recent project, web utilization was crossing 90% with database utilization approaching 60%. Memory utilization was creeping and web response times were crossing 5 seconds. Things were not happy in Rails land.
An inspection of the logs and skylight performance monitoring brought three things to my attention:
This is my first line of defense – can I limit a user to a request per second? The rate you choose is pretty much up to you; I went with 300 over a 5 minute period. This is one request per second per IP.
This particular app has two types of requests I didn’t want to rate limit:

- Requests for assets under /assets.
- Our monitoring pings /check on the server to check if it’s still online.

We’ll let those go through unimpeded:

Rack::Attack.throttle('req/ip', limit: 300, period: 5.minutes) do |req|
  req.remote_ip unless ['/assets', '/check'].any? { |path| req.path.starts_with? path }
end
In my #2 above, I blocked the two bad actor IP addresses. IP Addresses blurred because of obvs reasons (obvs).
Rack::Attack.blacklist('block bad actors') do |req|
  ['10.1.1.1', '10.1.1.2'].include? req.ip
end
I’ve seen rate limiters let Googlebot by, by default. Because why block Google when you want Google to visit your site literally as often as possible because SEO?
Knowing this information, it’s likely that as an evil-doer-scraper you’ll set your user-agent to match Googlebot’s to maximize the chance you’ll be let in, rate limits be forsaken and ignored.
It’s vitally important to let actual Googlebots through to your site, but what about fake lying liar googlebots? Those we want to 429.
Google helpfully published Verifying Googlebot, which states the following:

1. Run host the-ip-address; it will return the host for that IP, such as crawl-66-249-66-1.googlebot.com. Verify the domain is googlebot.com or google.com.
2. Run host crawl-66-249-66-1.googlebot.com; it should match the the-ip-address you started with.

Soooooo, if an HTTP request proclaims itself as a Googlebot user-agent, we could use the Resolv library in Ruby to verify it. Resolv is concurrent and does not block the world \o/
require 'resolv'

Rack::Attack.blacklist('googlebots who are not googlebots') do |req|
  if req.user_agent =~ /Googlebot/i
    begin
      name = Resolv.getname(req.ip.to_s)
      reversed_ip = Resolv.getaddress(name)
      resolves_correctly = name.end_with?("googlebot.com") || name.end_with?("google.com")
      reverse_resolves = reversed_ip == req.ip.to_s
      is_google = resolves_correctly && reverse_resolves
      !is_google
    rescue Resolv::ResolvError
      true
    end
  end
end
If you use HAProxy, this code is for you! Generally, your load balancer (HAProxy, heroku, etc) might present itself as the IP address the request is coming from. You do not want to rate limit based on requests from the proxy.
We’ll change all req.ip to req.remote_ip and add this code, which looks for the HTTP_X_FORWARDED_FOR header added by most load balancers. If not found, it will default to the IP.
class Rack::Attack
  class Request < ::Rack::Request
    def remote_ip
      @remote_ip ||= (env['HTTP_X_FORWARDED_FOR'] || ip).to_s
    end
  end
end
Final Code to make the awesome happen:
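The final listing presumably combined the pieces above into one initializer. A sketch, using the same paths, IPs, and limits as earlier (an initializer config fragment, so it needs the rack-attack gem and Rails to run):

```ruby
# config/initializers/rack_attack.rb
require 'resolv'

class Rack::Attack
  class Request < ::Rack::Request
    # Prefer the load balancer's forwarded-for header over the proxy IP.
    def remote_ip
      @remote_ip ||= (env['HTTP_X_FORWARDED_FOR'] || ip).to_s
    end
  end
end

# Throttle everything except assets and the uptime check.
Rack::Attack.throttle('req/ip', limit: 300, period: 5.minutes) do |req|
  req.remote_ip unless ['/assets', '/check'].any? { |path| req.path.starts_with? path }
end

# Block the known bad actors outright.
Rack::Attack.blacklist('block bad actors') do |req|
  ['10.1.1.1', '10.1.1.2'].include? req.remote_ip
end

# 429 anything claiming to be Googlebot that doesn't reverse-resolve to Google.
Rack::Attack.blacklist('googlebots who are not googlebots') do |req|
  if req.user_agent =~ /Googlebot/i
    begin
      name = Resolv.getname(req.remote_ip)
      reversed_ip = Resolv.getaddress(name)
      is_google = (name.end_with?("googlebot.com") || name.end_with?("google.com")) &&
                  reversed_ip == req.remote_ip
      !is_google
    rescue Resolv::ResolvError
      true
    end
  end
end
```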
Writing is day dreaming, when no words come at all. The rushing water has stopped, the creek dry and crackling.
Writing is using your own voice and avoiding sounding important or intellectual. Writing is real.
Writing happens in the morning for me. I imagine my brain is still excited about the day in the morning, and like sushi, degrades the further it gets from sleep. I should look into napping. You should find the part of the day you are most creative, and write then. AB test it. Trial and error.
Many developers spend days, weeks even, on getting the correct Tool. I have used Pages, Desk.pm, IA Writer, Scrivener, and more authoring tools than I can remember. They do not do the work for you, and in general do not make anything easier.
You can also spend days, and weeks, on your toolkit to transfer your words into an ebook format. Or, your site’s marketing website.
But here’s the truth: you are ignoring the part that matters: getting words out of your brain and onto ‘paper’. Stop with the tools, and just write. Use softcover or leanpub - both will create a page for you so you can focus on the writing.
Fiddling with tools instead of writing is wankery distraction.
I’m currently putting my ideas on writing into action creating Tex Mex Consulting. Some of my favorite articles:
]]>Most software developers can learn a hellaton from freelancing and consulting; at the very least you can learn what you’re worth. And it’s not $75,000 a year with free soda and 1% equity.
I’ve spent 6 years and many thousands of dollars in mistakes, masterclasses, and good old sweat equity creating a workflow system for expert software consulting.
I want to give you a path to consulting; a path to go from Employee to Freelancer to Consultant. A path to freedom, more money, and more respect.
The basic ethos of the book:
Freelancers are hired by companies. Consultants choose their clients carefully.
Want to try out some of the techniques in my system? I’ve created a 30 day free email course on Professional Freelancing – try it out below.
Check out the system for Professional Freelancing at Tex Mex Consulting.
]]>In that time, pre-1.0, Ember’s API would change constantly. Most people saw this as a flaw; after all, it meant that documentation, blogs, and Stack Overflow searches would frequently reference older versions. The router saw the biggest changes, frequently.
The benefit of this – Ember got the API right. Once it hit 1.0, the API was on point; and with the correct API in place, Ember keeps getting better. Ember is able to make changes under the hood, applying React style rendering, performance enhancements, and generally get better and better over time.
My experience with ember is generally along the lines of deleting more code than I add; Ember tends to add what I need over time, and I use that instead of my code.
Told over a series of many (perhaps too many) tweets, here is the story of my transition from Ember to Angular to Ember.
It's not quite "the first time you saw rails" tingly, but I have the tingles about emberjs. Well played @wycats , well played.
— Jesse Wolgamott (@jwo) December 14, 2011
I live-coded some ember at Houston Code Camp and it didn’t go very well. Too many rough edges, mostly with ember-data.
Speaking of @houstoncodecamp, I'll be live coding an @emberjs app with rails as the API. Come experience the magic! cc: @HoustonJS
— Jesse Wolgamott (@jwo) August 2, 2012
@tehviking I like that viewpoint. (I like both angular and ember)
— Jesse Wolgamott (@jwo) April 11, 2013
Love both angular and ember. excited to see which ages better.
— Jesse Wolgamott (@jwo) August 27, 2013
@garrettdimon angular fits into your rails app. Your rails all fits into ember.
— Jesse Wolgamott (@jwo) August 25, 2013
Both great. Ember takes more thought, may be better long run
During 2013, JB and I wrote and published AngularJS + Rails. Angular was very pragmatic and got some things done. Others, like directives and services, seemed half-baked.
#Realtalk Sooo jb and I bet on angular, with good success; my timeline is full of ember-love and I'm planning to revisit embah.
— Jesse Wolgamott (@jwo) March 28, 2014
I read about the community love at EmberConf 2014 and was insta-jealous. I remembered that what I love about Rails and Ruby is the community; I decided to revisit Ember in earnest and see how things were rolling along.
.@iwarshak to be honest, I’m amazed at ember now vs nov 2012. I’m using on a project and have lots of love toward @emberjs
— Jesse Wolgamott (@jwo) April 29, 2014
. @hkarthik yep, I use both now. Angular to enhance an existing app. Ember for greenfield.
— Jesse Wolgamott (@jwo) August 11, 2014
Preview of @emberjs app I built for my college fantasy football draft. So simple. So fast. So easy. Blogpost soon. pic.twitter.com/gkBef9W9un
— Jesse Wolgamott (@jwo) August 27, 2014
Also had super awesome fun times with ember and cordova. This is such a stack of win.
— Jesse Wolgamott (@jwo) December 29, 2014
Quick look of what I did to filter by company and/or first/last name. joins arel together / to_sql (rails and ember) https://t.co/AsoZqJpdXx
— Jesse Wolgamott (@jwo) December 3, 2014
I dive into ember-cli; it’s the final piece of the ember puzzle. Followed by ember-addons, which are quite simply amazing.
ember-cli: Getting Started With the Awesome http://t.co/M0XVHVm8YO | more reasons to love @emberjs
— Jesse Wolgamott (@jwo) October 20, 2014
Angular announced their 2.0 backward compatibility breakathon apocalypse. I finalize what’s been happening for a year, and break up with Angular for Ember. #sorrynotsorry
Watching Portlandia, getting moar excitet about @EmberConf which OH BTW is is Portland. :boom:
— Jesse Wolgamott (@jwo) February 13, 2015
EmberConf 2015. Such amazing awesomeness.
A collection of links that cover what happened during EmberConf 2015. https://t.co/IYZvxBwlWN #emberconf :: amaze. thanks @sugarpirate_
— Jesse Wolgamott (@jwo) March 5, 2015
Can’t wait to see what happens in 2015 and beyond.
PS. The actual 1000 days is Tue, 09 Sep 2014; I had built my 3rd app by this time. I doubt it’ll take you 1000 days to fall in love. Want to try? Check out the CodeSchool Course, ember-cli, and the ember guides.
PPS: I created this using a combination of t and the twitter archive. (t only returned 3200 tweets in a search).
]]>Alternate title: “Dancing with Dragons, or, how I got screwed by ActiveRecord Callbacks yet again.”
I’ve been working on the following scenario:
Fairly simple, but here’s what’s going on:
When a photo is created, we push it to Pusher on the new_photo channel. The JSON we’re sending is the same JSON that ActiveModelSerializer will use in the API. WIN.

The JSON that comes through has something rather odd for large_image_url: it’s a local path to the /tmp/upload directory, which absolutely does not work. :/

Basically, when Pusher is sending PhotoSerializer.new(self).attributes, the image has not been stored up to S3. We want to do Pusher later, after fog does its thing.
Callbacks are run in this order: before_validation, after_validation, before_save, before_create, after_create, after_save, after_commit.
So, after_create is occurring before after_save. Let’s switch that up and our problems should be solved:
Result: Same thing. huh?
To see the order in which callbacks are run, we can do this:
That will show the name of the callbacks being called, and will look something like:
That 70220584822820 is our block callback. And it’s first. Why is it first? Can we get it to go last? Callbacks are run in a last-first order. Further, changing it to something like the following also doesn’t work.
We need to store the image before we get the attributes. So, let’s add that store_image! callback we saw in the list of callbacks. This will result in store_image! being called twice, but it won’t actually store the image twice; the second call is basically a no-op.
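To see why the block kept winning, here’s a stdlib-only toy (not ActiveRecord; the names are made up) of the last-registered-first rule for after callbacks:

```ruby
# Mini after_save chain: ActiveSupport runs `after` callbacks in the
# reverse of the order they were registered.
class ToyCallbacks
  def initialize
    @after_save = []
  end

  def after_save(name)
    @after_save << name
  end

  def run_order
    @after_save.reverse  # last registered fires first
  end
end

model = ToyCallbacks.new
model.after_save(:store_image!)  # registered by the uploader first
model.after_save(:pusher_block)  # our block, registered last in the model
model.run_order  # the block still runs before store_image!
```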
CALLBACKS!!!
It reminds me of a story about a younger Rails developer warning other Rails devs about the dangers of Callbacks, and how they tend to screw you over.
]]>I found Rails. I found open source. I found Ruby. And I found myself.
Ruby/Rails to me: a community where people help each other learn, full of interesting and fun challenges, and a desire to learn the entire stack from UNIX to CSS.
Fast forward 5 years; I’ve been thinking about what the next 5 years will look like, for both me specifically and for web/mobile applications in general.
It’s fairly clear to me that most medium to large’ish apps won’t be a single Rails app with 100 controllers. That dog has tried to hunt, and well… that dog won’t hunt. OneLargeRailsApp is not the future.
Rails isn’t going away. I will still use it for prototyping and for applications where expressiveness is key (billing code and administration of data are two examples). Rails will become part of the puzzle, not the whole thing. I don’t envision anything being the entire puzzle anymore. Too much awesome tech which each do one thing super awesomely.
gsub/Go/Node|Elixir/
I don’t anticipate quote-unquote leaving Rails. There’s no need to leave like there was a need for me to flee .NET. Instead, the communities will merge together under an open-source umbrella. I doubt there will be one big huge community in which you do ALL of your development.
The future, in my opinion, is many languages. Many communities. Many meetups.
Bring on the future!
(and my hoverboards. and jetpacks. and flying cars)
]]>So excited by how Ember CLI is shaping up. This is the story I've been wanting to tell about developing web apps for the last 3 years.
— Tom Dale (@tomdale) October 18, 2014
What does ember-cli give you over other-tool.js?
** Tl;dr ember-cli is the awesome **
However: there is a getting-started period where you might be confused by some of the conventions. (Confession: I had to fiddle around to figure stuff out. Hopefully you, dear reader, won’t have to do so.)
If you’ve used ember before in Rails or Lineman3, moving to ember-cli could confuse you a bit — ember-cli uses ES6 modules, which you may have never seen before.
Prior to ember-cli, you may have declared your ember data models in this fashion:
** app/js/models/user.js **
Things to notice: in this file, we assume that Ember and Ember Data have already been loaded, and are in the global namespace ready for use.
ES6, on the other hand, requires you to be explicit about what you want to use — you import namespaces, name them what you want, and then use them.
** app/models/recipe.js **
We import the “DS” from ember-data, and export the object we define. What we notice:
Coming from Rails, the first thing I want to do is work with Sass. I found a couple of hoops you had to jump through to get your .scss working again.
If you’re new to npm, the “--save-dev” flag means the package is saved as a development dependency for this project.
Make it look like this:
This will create an asset-pipeline of sorts for you. Broccoli will compile any sass files in app/styles and vendor/css, and compile app.scss into “assets/app.css”.
Generally, this is what you want.
Move app/styles/app.css to app/styles/app.scss. You can now use scss all you like:
(We’ll use Bourbon in a future article)
This will create app/routes/index.js (and create “routes” directory for you, cool). We’ll change the default to the standard ember starter:
And we’ll create a template to show.
This created app/templates/index.hbs. We’ll change its contents to:
If things are good to go, you’ll see the colors listed out on the screen. WOAH, the power.
This creates a “dist” directory with “index.html” and other static assets. We can deploy this to S3, or anywhere else. TOTES AWESOME.
If we run into CORS problems, or generally want to use Heroku, there’s a build pack for maximum awesome.
At this point, you have an ember app, build with ember-cli, hosted on Heroku.
We can set an API Proxy to get around CORS problems:
heroku config:set API_URL=http://api.example.com/
If you want to check out a repo where this is hooked up 3
My talk on building hybrid apps with ember is up! http://t.co/iKOB75Dgex Presented for @HoustonJS at @PoeticSystems
— Jake Craige (@jakecraige) October 3, 2014
(seriously, check out emberaddons.com)
Except: I have dotfiles and that makes it easier. After copying my home directory over and re-dotfiling, things seemed good.
Except: my postgres databases. How to get them all from Air A to Air B without copying every.single.one.
Enter: pg_dumpall. It’ll dump every database into a file, which you then import on the new computer.
When moving to a new computer
(Move the file to the new computer)
if you have trouble with “database $username not found”, type in “createdb”
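Put together, the commands look something like this sketch (the file name is made up):

```shell
# On the old computer: dump every database into one file.
pg_dumpall > everything.sql

# Move the file over (AirDrop, scp, whatever), then on the new computer:
psql -f everything.sql postgres
```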
To confirm, open psql, and then run \l to list the databases.
In March, I flew to Greenville, South Carolina, to experience a course first hand. The cohorts were 8 weeks into their JavaScript course, and amazed me with their abilities. This was the real deal (discussing how to make Backbone models more maintainable). The Iron Yard came to Houston, and asked me to teach the Rails Engineering course. I kept looking for a reason to say no …
The Iron Yard is student-focused to its core. Their formula is not only badass but is also designed for optimal awesome for the students. Reasonable rates, fostering community involvement, mock interviews, and a 100% guarantee. I had no more “no’s” to give.
I’m teaching a Ruby heavy Rails Engineering course. Matt Keas is leading the front end course (JavaScripts).
My goal? Give people the opportunity to learn how to have a much happier career. All while using the happiest stack I’ve encountered yet: Rails+. Rails+: Rails as the API backend or Rails as the prototype. We’ll focus heavily on Ruby, only getting into Rails after we’ve created console and sinatra apps. Stretch goal: EmberJS.
And the coolest thing? Instructors send a weekly recap of what their cohorts are doing. Awesomely, the other instructors believe, as I do, that students should learn Ruby. After all, Rails isn’t magic, it’s Ruby.
Since everything is bigger in Texas, we’re hosting a free coding event, Tech-sas. We’ve sold out our venue, which is pretty freaking awesome. Over 200 people, both with some coding experience and total beginners, will learn how to build awesome things. (Free). (Soldout). ( :O )
Volunteering at the Austin KidsCodeCamp at RailsConf (Austin) was a changing experience. Kids from 8 to 15 years old were learning to program using Scratch, which blew their minds wide open. Especially the robots – so much excitement, very potential.
The Iron Yard has (free) code schools for kids all over the south and southeast.
]]>My initial reaction: Increase the memories!
I looked deeper into the problem, and along with some colleagues, discovered that at midnight our server was kicking off 9 different processes at the same time. 9x Rails is just about 8x too many.
My whenever schedule (names changed to reflect tex-mex dishes)
set :output, 'log/cron.log'
job_type :rake, "cd :path && RAILS_ENV=:environment bundle exec rake :task :output"

every 1.day, :at => '12am' do
  rake 'margarita:enchiladas'
  rake 'margarita:fajitas'
end

every 1.minute do
  rake 'guacamole:soft_tacos'
end

every 12.hours do
  rake 'guacamole:tortilla_soup'
  rake 'guacamole:grande_combo_de_tejas'
end
Seems pretty standard for how I organize and schedule tasks. But each of these would be (and was) running at midnight, since they all intersect at that particular point in space and time.
Instead, if we spread this around a bit, we could still get the business requirements accomplished:
The timing, other than that, didn’t matter. And in your apps, it probably doesn’t matter that often either.
An updated “Hand Crafted, Artisan Tex Mex Whenever File”
set :output, 'log/cron.log'
job_type :rake, "cd :path && RAILS_ENV=:environment bundle exec rake :task :output"

every 1.day, :at => '12am' do
  rake 'margarita:enchiladas'
  rake 'margarita:fajitas'
end

# skip the top of the hour. Every 5 minutes
every '5,10,15,20,25,30,35,40,45,50,55 * * * *' do
  rake 'guacamole:soft_tacos'
end

every 1.day, :at => ['3am', '3pm'] do
  rake 'guacamole:tortilla_soup guacamole:grande_combo_de_tejas'
end
What did this gain us?
- Instead of tortilla_soup and grande_combo_de_tejas running at the same time, they’ll now run sequentially (saves on the RAM)
- soft_tacos skips the midnight run

Reminder: Hand craft your whenever cron jobs. And to test the output, run whenever:
0 0 * * * /bin/bash -l -c 'cd /Users/jwo/Projects/texmex && RAILS_ENV=production bundle exec rake margarita:enchiladas >> log/cron.log 2>&1'
0 0 * * * /bin/bash -l -c 'cd /Users/jwo/Projects/texmex && RAILS_ENV=production bundle exec rake margarita:fajitas >> log/cron.log 2>&1'
5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/bash -l -c 'cd /Users/jwo/Projects/texmex && RAILS_ENV=production bundle exec rake guacamole:soft_tacos >> log/cron.log 2>&1'
0 3,15 * * * /bin/bash -l -c 'cd /Users/jwo/Projects/texmex && RAILS_ENV=production bundle exec rake guacamole:tortilla_soup guacamole:grande_combo_de_tejas >> log/cron.log 2>&1'
More on Cron and whenever:
]]>First, the details about my talk. I’ve been teaching Ruby off Rails for a year now. I heart Ruby like a hundred <3s and think we can show off Ruby more than just showing Rails APIs to developers.
My description of Teaching Ruby without Rails
What are the essential elements of Ruby that an artisan ninja developer needs to understand before they can see the beauty of Ruby? After all: there was Ruby before there was Rails; there is Ruby outside of Rails.
Let’s cover examples of how to teach Blocks, Send, Class Eval, and Modules to developers who can develop, but not in Ruby land (yet). And, how these 4 features of Ruby can enlighten Rails and DSLs.
In the ‘Why should we choose you’ section, I added:
Honestly? I don’t know if you should… I’m a pretty good speaker and people have been way supportive of my teaching Ruby without the Rails.
How about why you SHOULDN’T choose me, eh?
- I’m a white male from the US.
- I’m not an A-list Ruby developer
- I probably won’t get someone to buy tickets that wouldn’t buy it already
However, some things that MAY tip the scales in my favor:
- I don’t use like a billionty meme’s in my talks
- I’m decently funny and hopefully inspiring
- I like <3 tacos, Ruby, and Whiskey.
I also included a link to my Rails Ignite 2012 talk. I submitted and waited. In late January, I received an email letting me know I wasn’t selected. It was your standard rejection letter.
OK, I thought – and didn’t buy my plane ticket. I was pretty surprised when I received the explanation letter on March 3rd. Here it is:
First of all thank you for taking your time and submitting proposal. We have received more than 90 and had to select only 7 of them. It wasn’t an easy task and I’d like to give you a little feedback on why we didn’t choose ones that you had submitted.
You’ve submitted: Teaching Ruby without Rails
Hmm tough call. You’ve convinced us that you are a good speaker. However the talk proposal lacked the “wow” factor and might not fit into 20 minutes.
The talk sounds kind of interesting and I’m sure it can get accepted by plenty of conferences.
(I missed this completely: 20 minutes is a good speech length, but not enough time to cover what I said I would.)

(True enough, reading back over it. I should have submitted something like ‘A Lambda, a Proc, and a Block walk into a bar’… and talked about their similarities and differences.)

(I’m sending enough information in my submissions to convey I’m decent on stage.)
Feedback like this would be very very awesome to send out to people who asked to speak at your conferences. Brutal honesty is best: otherwise the speakers won’t know what to change to get better.
We’ve all been there – you have code that needs to be run, and it’s taking forever. You wish there was a way to speed things along, but you can’t tweak the algorithm. You read about multi-threading but hear tales of dragons, pirates, and warnings of people who have ventured before you never to return.
But fear not! Celluloid exists, and is awesome. It’s an actor-based concurrency library, but all you need to know is that it’s awesome. You can process an array of things in parallel, and continue when it’s complete.
Let’s say you started with the task to see which servers are alive and which are not-so-much-alive:
```ruby
servers = Server.all.map do |server|
  server.status = check_status(server)
  server # return the server (not the status) from the block
end
# do something with the non-responsive servers
```
So this would map over all servers, and make a network call to check if it’s alive.
In a normal non-parallel world, if each status call would take 0.1 seconds, then 10 servers would take 1 second, and 10000 servers would take 1000 seconds. Sub-optimal.
If you execute them in parallel, then goodness happens. Under the hood, celluloid-pmap creates a Celluloid::Future for each element in the array; pmap waits for each future’s value, so we continue only once they’ve all finished.
That same example in parallel would look like:

```ruby
servers = Server.all.pmap do |server|
  server.status = check_status(server)
  server # … same code as before, just pmap instead of map
end
```
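To see why this helps without installing anything, here’s a minimal plain-Ruby sketch of the same idea using one thread per element. `check_status` here is a hypothetical stand-in that sleeps to simulate a 0.1-second network call; Celluloid’s futures do something similar (plus supervision) under the hood:

```ruby
# Hypothetical stand-in for a 0.1-second network status check.
def check_status(server)
  sleep 0.1
  :alive
end

servers = (1..10).to_a

t0 = Time.now
# One thread per element; Thread#value blocks until that thread finishes.
statuses = servers.map { |s| Thread.new { check_status(s) } }.map(&:value)
elapsed = Time.now - t0

puts statuses.length # prints 10
puts elapsed < 0.5   # prints true: ~0.1s wall time instead of ~1s serially
```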
Let’s say you need to create a PDF report for a set of users, store them at S3, and download them for you to give to your users when it’s complete. This is a great case for parallel processing since you’ll be waiting on:
With the example below, your total processing time gets cut to the slowest report generated/uploaded/downloaded.
```ruby
["email1@example.com", "email2@example.com"].pmap do |email|
  user = User.find_by_email!(email)
  CreatesReports.new(user).generate_reports
  user.reports.each { |report| `curl -o #{report.filename} \"#{report.pdf.url}\"` }
  puts "reports ready for #{email}"
end
puts "Everybody's done!"
```
Other examples of usages:
If you are iterating over a set of documents and calling any resource that has a limited connection set, or a rate limit, you might run into a connection pool problem. That is, you might try to connect with 20 connections when your default pool size is 5. What can be done?
celluloid-pmap uses a Celluloid Supervisor to set a maximum number of actors working at the same time. So if you can only open 5 Postgres connections at once, you can set that like so:
users.pmap(5) {|user| user.say_anything! }
The argument (5) says it’s OK to use as many as 5 actors at once. Without an argument, celluloid-pmap defaults to the number of cores your machine has.
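Here’s a hedged plain-Ruby sketch of what that limit does (this is not Celluloid’s actual supervisor implementation, and `bounded_pmap` is a hypothetical name): a fixed pool of worker threads pulls items off a queue, so at most `limit` blocks run concurrently:

```ruby
# Sketch of pmap-with-a-limit using plain threads: at most `limit`
# workers run the block at the same time.
def bounded_pmap(items, limit)
  queue = Queue.new
  items.each_with_index { |item, i| queue << [item, i] }
  results = Array.new(items.size)

  workers = Array.new(limit) do
    Thread.new do
      loop do
        pair = begin
          queue.pop(true) # non-blocking pop; raises ThreadError when empty
        rescue ThreadError
          break # no work left for this worker
        end
        item, index = pair
        results[index] = yield(item)
      end
    end
  end

  workers.each(&:join)
  results
end

puts bounded_pmap([1, 2, 3, 4, 5], 2) { |n| n * n }.inspect
# => [1, 4, 9, 16, 25]
```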
I started adding code into a Rails initializer from Celluloid Simple Pmap example. It went like this:
config/initializers/celluloid-pmap.rb
```ruby
module Enumerable
  def pmap(&block)
    futures = map { |elem| Celluloid::Future.new(elem, &block) }
    futures.map { |future| future.value }
  end
end
```
This worked very well, but I found I was adding it to every.single.project. So I worked up an example to have a Supervisor to help with connection pooling and rate limiting, and bam… a gem was born.
Installation and configuration is over at https://github.com/jwo/celluloid-pmap. But it’s as simple as you’d think:
```
gem install celluloid-pmap
```
If you are a guy and you want to help fight the good fight, tell people why you want diversity and not dickery, from your point of view.
This comes after:
And this is in the last week.
So. Why do I want diversity (and not dickery) in our field? a) it makes economic sense, b) we need to create better software and more of the same probably isn’t going to help, and c) it’s the human thing to do.
Software development pays very well, and the industry is in desperate need of qualified developers. More high-paying jobs mean more money available to be spent on subscription codecasts, ebooks, and training materials. If there’s more money in the system, there are more opportunities for everyone to take home more money. A rising tide lifts all boats.
Given that rather simple assertion, it does not make economic sense to drive people out of our industry. We should be welcoming people and making changes to our processes to become happier and more productive. We should be going out of our way to have a society that anyone can join if it interests them. We do not do this as an industry, and that does not make macro or micro-economic sense.
Given the extraordinary and insatiable appetite our industry has for developers, it makes sense to teach people with an aptitude and interest in programming how to do so. It is a way for low-income families to have a better life. We should encourage this.
When Women Make More Money, Everybody Wins
Software gets better when we can break outside of our mental models and solve problems using a different mindset. What a fully functioning group needs is not 10 “rockstars” who think the same – that tends to lead to group think and programs that solve the wrong problems.
What groups need is diversity in world views, shared experiences, and cultural references. If you want to build an app that only people in your specific niche in the world truly get, you should eschew diversity. If you want to have as many people in the world use your software, you should embrace diversity.
Women are making robots more humane
Things I believe in, an incomplete list:
It is wrong to treat someone differently because of {skin-color | gender | sexuality | anything}.
I believe it is no longer acceptable to sit back and say the status quo is good enough. Because it is not good enough. From healthcare to income inequality to a growing police state – the status quo is not good enough and we should not be defending it.
Think of how people were treated fifty years ago — we’ve changed since then — but think about how you consider people who defended the status quo in the 1960s (my assumption here is that you do not think fondly of them).
Be the person you want 2063-you to be proud of. I do not see any possible way that includes treating women as if they don’t belong in any profession.
Don’t we (collectively) believe that happier developers make for more productive developers? Don’t we want our software to make the world a better place? That starts with treating all people as human. Next, go out of your way to help.
Ways to help:
I have disabled comments; To continue the discussion, let’s talk on twitter – or better yet, write a blog post in response and/or in agreement.
[1] In fact, with the advancements that young girls have over young boys in math and science, one could conclude that we are losing our best developers before they get started. Debunking Myths about Gender and Mathematics Performance
sake
Defunkt wrote sake in ought-eight (2008) for system wide rake tasks. This isn’t it; this is just a file, name it what you want.
I’ll use rake tasks to do data-migrations that I don’t want to stick around in db/migrations. Or to do one-off things like re-process images.
But it’s not all that awesome to actually run those things in production when you need to. Here’s a way!
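The task definition itself didn’t survive on this page. Based on the Stack Overflow answer cited below, a Capistrano 2 recipe along these lines does the trick (the file path and name are assumptions):

```ruby
# lib/recipes/sake.rb -- a sketch reconstructed from the Stack Overflow
# answer the post cites; adjust paths and environment for your app.
namespace :sake do
  desc "Run a rake task on the server, e.g. cap sake:invoke task=db:migrate"
  task :invoke do
    run "cd #{current_path} && RAILS_ENV=#{rails_env} bundle exec rake #{ENV['task']}"
  end
end
```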
Then, make sure you require it
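In Capistrano 2 that’s typically a `load` line in config/deploy.rb (the recipe path here is an assumption):

```ruby
# config/deploy.rb
load 'lib/recipes/sake.rb'
```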
And that’s it! Now, when you want to run that code, say to migrate a database:
```
cap sake:invoke task="db:migrate"
```
This is not mind-blowing, but it’s not very obvious for new deployers, so I wanted to add it here for the googlers.
Don’t name your task rake – newer versions of rake (0.9.2) are not pleased with that. That’s how I ended up with sake. (rake => sake).
This answer on Stack Overflow had some good stuffs. I modified from there.
So… Here’s a solution to fast-compile your assets by only checking if the changeset includes changes under:

- app/assets
- lib/assets
- vendor/assets
- Gemfile.lock
- config/routes.rb

The asset directories are pretty self-explanatory. We compile if the Gemfile.lock changed, to catch when something like twitter-bootstrap-rails was added or updated. At first config/routes.rb seems out of place, but it’s there in case you load an engine (or remove one).
Make sure you’re loading deploy/assets in your Capfile (load 'deploy/assets').
The regular assets:precompile task gets overridden by our custom task. We then ask git how many files under the watched paths changed since the currently deployed revision. If there are changes, we call rake assets:precompile (like calling super).
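The overriding task itself is missing from this page; the gist linked below boils down to something like this Capistrano 2 sketch (variable and helper names follow Capistrano 2 conventions, but check the gist for the exact body):

```ruby
# config/deploy.rb -- a sketch of the conditional precompile, after the
# pattern in @xdite's gist (Capistrano 2).
namespace :deploy do
  namespace :assets do
    task :precompile, :roles => :web, :except => { :no_release => true } do
      from = source.next_revision(current_revision)
      watched = "app/assets lib/assets vendor/assets Gemfile.lock config/routes.rb"
      changes = capture("cd #{latest_release} && #{source.local.log(from)} #{watched} | wc -l").to_i
      if changes > 0
        run "cd #{latest_release} && #{rake} RAILS_ENV=#{rails_env} #{asset_env} assets:precompile"
      else
        logger.info "Skipping asset pre-compilation because there were no asset changes"
      end
    end
  end
end
```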
```
triggering after callbacks for `deploy:update_code'
  * executing `deploy:assets:precompile'
  * executing "cat /home/deployer/apps/yourapp/current/REVISION"
    servers: ["yourapp.comalproductions.com"]
    [yourapp.comalproductions.com] executing command
    command finished in 921ms
  * executing "cd /home/deployer/apps/yourapp/releases/20120903203634 && git log 2e03e2b18530c86a69ba8a2c2d75909142767f5b.. app/assets lib/assets vendor/assets Gemfile.lock config/routes.rb | wc -l"
    servers: ["yourapp.comalproductions.com"]
    [yourapp.comalproductions.com] executing command
    command finished in 400ms
 ** Skipping asset pre-compilation because there were no asset changes
```
PROOF: 400ms < 2.5 minutes
This Gist by @xdite is the direct source for the code. I’ve used this in several projects and think it’s awesome-sauce.
The Ruby online training course starts July 31, and I’ll keep applications open through Thursday morning, August 2nd (moved up from August 3rd). It’s a self-paced course, so if you start this week you won’t be behind. I’ll be granting scholarships to the candidates I feel have the best chance to succeed and will have the most impact. Obviously subjective, but I think this will work. Apply and learn how to be a happy programmer!
So…
You can learn more about the course at http://rubyoffrails.com
UPDATE:
I was asked what my relationship is to RubyOffRails: I created RubyOffRails in April 2012 and run the course: the video codecasts, discussions, and code reviews are all me.