Tag Archives | PHP

New Hampshire Ruby, 16-Oct-2007: Rails Deployment

Organizer Scott Garman posts:

This month’s NHRuby.org meeting topic will be on deploying Ruby on Rails applications. While back in May our meeting focus was on using Capistrano as our primary deployment tool, this month Scott Garman will be demonstrating a simpler application to manage Rails deployments, called “Vlad the Deployer.”

Vlad the Deployer “targets the 80% use case” of deployments and boasts an engine written in less than 500 lines of code. Is simpler always better? Drop by our meeting and find out!

Scott will also be discussing various Rails-related tidbits he’s been working with recently.

Anyone who attends the meeting will be offered special coupon codes from Linode if they’re interested. The coupons give you two months of free service after purchasing one month of their Virtual Private Server (VPS) hosting. Linode sponsored the recent Rails Rumble, and Nick Plante had many good things to say about them.

WHEN: Tuesday, October 16, 2007. 7-9 PM.
WHERE: RMC Research Offices, 1000 Market Street, Portsmouth, NH.

For a map and driving directions, see our wiki site.

Notes from the NH Ruby/Rails Group, 25-Sept-2007: Scott Garman and Nick Plante

Five attendees made it to the September meeting of the NH Ruby/Rails User Group, held as usual at the well-appointed RMC Research offices, though not on the usual third Tuesday. A round of introductions led to some vigorous discussions, including how to broadcast and record meetings (perhaps a future meeting will be available via WebEx?) and other computing organizations and conferences in the Granite State (GNHLUG, SwANH, the infoeXchange conference, NH High-Tech Council, etc.).

Nick Plante had some great stories about the Ruby Rampage, a 48-hour online programming contest he helped run. A large number of teams competed, and a fair number actually delivered working applications, some of them looking quite polished. There’s open voting on the most popular applications, and the winners will walk away with a number of desirable prizes. Nick’s also been tapped to do the closing presentation at Ruby East, where he will demonstrate some of the more popular applications. There are some great apps there, and some will be providing their source code, though not all. Check them out!

On to the main presentation: Scott and Nick had started a simple Ruby app for NHRuby members to suggest future meeting topics and vote on them, giving the organizers some ideas on what to present next. Tonight, Scott presented the unit, functional and integration test frameworks built into Rails and/or available as add-ons. Nick showed the UI for the voting interface and dove deep into how the form functioned. There was a lot of discussion, with debates on where the boundaries lie between unit, functional and integration tests, on how Javascript (via the Prototype libraries) can be integrated into the interface, and on how Rails allows graceful degradation on platforms where Javascript is unavailable or disabled. Comparisons were made between the internal, programmer-centric tests provided by Rails and external tests a QA person might run with a tool like Selenium. Lots and lots of great ideas. In a future meeting, Scott and Nick will discuss the next step, deploying the application, perhaps using Vlad the Deployer and/or the new Capistrano 2.

Thanks to Scott for organizing the meeting, Tim of RMC for providing the meeting space, Scott and Nick for the presentation, Brian for mentioning Selenium and all for their attendance and participation!

New Hampshire Ruby/Rails Group, 25-Sept-2007: Live Coding

Organizer Scott Garman posts:

“Tomorrow’s NH Ruby/Rails User Group meeting will include a continuation of the live coding project Nick Plante and Scott Garman started during the July meeting. This project was to develop a web application where group members could submit proposed topics for future meetings, and vote on their favorites… This month, Nick Plante will demonstrate how to add the voting system to the application. Nick will use an AJAX-based 5-star voting system that you may have seen on many product review sites… Scott Garman will give an introduction to the Rails testing framework, demonstrating how unit, functional, and integration tests are written. Scott will also demo some useful third-party tools that make testing easier and faster, and how they integrate with the NetBeans IDE.”

WHEN: Tuesday, September 25, 2007. 7-9 PM.
WHERE: RMC Research Offices, 1000 Market Street, Portsmouth, NH.

For a map and driving directions, see our wiki site.

James Fallows (July 24, 2007) – Biting the bullet on Windows Vista: back to XP (Technology)

James Fallows is more formal about it, writing in the Atlantic:

“The other bad call came late last year, when I said that users should wait to buy new computers until the new version of Windows, Vista, was available — and that “of course” they should buy Vista-equipped machines once they could. That was wrong. I apologize.”

Thanks to Ernie for the link!

What I’m listening to…

July has found me working out more often and more consistently. One of the big challenges with staying on an exercise machine is the tedium. It is boring. I’ve found audiocasts have helped me pass the time, occupy my mind and make me feel the time spent is more worthwhile. This month and last, I’ve listened to:

  • The keynote presentations from the RedHat Summit 2007
  • Nearly all the videos from the RedHat site
  • Several weekly Technometria audiocasts
  • David Weinberger on “Everything is Miscellaneous”
  • Chris Lydon interviewing David Weinberger
  • David Weinberger interviewing Cory Doctorow
  • Several Boston PHP meetings
  • The Massachusetts Technology Leadership Council’s Open Source Summit presentations (thanks Dan Bricklin!), including discussions on GPL3, the OLPC, Lightning Presentations, and more.

I’ll plug them any chance I get: the GigaVox network has some of the best, most interesting, high-quality audiocasts for techies on the web. I’m a contributing member and I encourage you to do the same.

Brute Force Detection (BFD) script for vsftpd

vsftpd is the “very secure file transfer protocol daemon” and a great product to use for file transfers. Unfortunately, a bunch of script kiddies and zombies run scripts guessing the 2283 most common user name and password combinations. Sometimes, I’ll see several of these runs of login attempts in a single day, peaking one day at over 13 thousand bogus login attempts. I resent the amount of time, resources, bandwidth and power my server has to spend rejecting these attempts.

Last year, I blogged about the script Brute Force Detection, which works with many servers and reads the logs to ban repeated failed login attempts. Unfortunately, it did not ship with settings to read vsftpd-generated logs, and there were no directions simple enough for me to follow to set one up. A year passes, I read more, learn more, especially from the great Man Page of the Month sessions at MonadLUG, and I find a couple of hours to hack at this, motivated by yet another log report filled with vsftpd login attempts. Here’s what I did:

BFD uses rule files that are portions of scripts customized for the particular log to read, the messages to look for, and the locations at which the IP addresses of the offending attacker can be found. When each rule file in turn is read into the main BFD script, it becomes part of a set of commands that slices and dices the log, finds the (adjustable) number of excessive attempts, and issues the commands to ban attempts from that IP address. The trick is figuring out what commands you need to implement to return the stream of IP addresses in the correct format. Here’s an example, the proftpd rule file:

if [ -f "$REQ" ]; then

ARG_VAL=`$TLOGP $LP $TLOG_TF | grep -w proftpd | grep -iwf $PATTERN_FILE | tr '[]' ' ' | tr -d '()' | awk '{print$10" "$13}' | tr -d ':' | awk '{print$1":"$2}' | grep -E '[0-9]+'`

fi

Boy, is that inscrutable! Here’s a quick tour: REQ is the required file (the binary that runs proftpd), so the rule only runs if that file exists; the matching “fi” (“if” spelled backwards) closes the block. Cute! The other variables feed the main processing line, the one that assigns ARG_VAL. That line runs the log (named in LP) through a series of pipes that filter the result down to the items that need to be processed. grep (Globally search for a Regular Expression and Print) passes matching lines through to the next command in the pipe. tr translates characters from one set to another, or with -d deletes them. awk is a simple text-processing language, really handy for tricks like printing the tenth and thirteenth words of a line.
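To see those tools in action, here’s a tiny sketch that runs a single, fabricated proftpd-style log line through the same kinds of transformations. The log line and the field numbers below are illustrative only, not taken from a real rule file:

```shell
# A fabricated log line, just to illustrate the tools in the pipe
line='Oct 16 19:02:11 host proftpd[1234]: (badhost[10.0.0.5]) USER admin (Login failed): Incorrect password.'

# grep -w matches "proftpd" only as a whole word
echo "$line" | grep -w proftpd

# tr '[]' ' ' turns brackets into spaces; tr -d '()' deletes parentheses
echo "$line" | tr '[]' ' ' | tr -d '()'

# awk then slices out individual whitespace-separated fields; in this
# made-up line, fields 9 and 11 happen to hold the address and login name
echo "$line" | tr '[]' ' ' | tr -d '()' | awk '{print $9":"$11}'
```

The last stage prints 10.0.0.5:admin, which is the ADDRESS:NAME shape the rule is aiming for.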

Here’s the trick to working this out: take a log file you know contains the suspect violations, use cat to feed it into the beginning of the pipe described above, and add the pipe’s stages one at a time to figure out what each does and what the final result looks like, in this case a text file of IP addresses and login names.
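That stage-by-stage approach can be sketched like this (the log file name and its contents are fabricated for illustration):

```shell
# Build a tiny sample log with one failed vsftpd login and one unrelated line
cat > /tmp/sample.log <<'EOF'
Oct 16 19:02:11 host vsftpd[999]: pam_unix: authentication failure; rhost=10.0.0.5 ruser=admin
Oct 16 19:02:12 host sshd[1000]: some unrelated line
EOF

# Stage 1: keep only the vsftpd lines
cat /tmp/sample.log | grep -w vsftpd

# Stage 2: of those, keep only the failed-auth lines that mention rhost
cat /tmp/sample.log | grep -w vsftpd | grep -i rhost

# ...keep appending one pipe stage at a time until the output is the
# ADDRESS:NAME list the rule is supposed to produce
```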

This is what gets fed back to BFD. Then it counts the number of attempts, compares the count against the TRIG value set in the rule, and if it exceeds the trigger level, executes the command (set in BFD’s configuration file, conf.bfd) to ban the offending attacker. (It also optionally sends an email to the admin, a good idea to confirm you’ve got things set up properly.)
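Conceptually, the counting-and-trigger step can be sketched in a few lines of shell. The TRIG value, file name and addresses here are all made up; BFD does this internally:

```shell
TRIG=3   # ban threshold, analogous to BFD's TRIG setting

# One ADDRESS:NAME entry per failed login, as the rule would produce
printf '%s\n' 10.0.0.5:admin 10.0.0.5:root 10.0.0.5:test 192.0.2.7:ftp > /tmp/attempts.txt

# Count attempts per source address and flag any that reach the trigger
cut -d : -f 1 /tmp/attempts.txt | sort | uniq -c |
  awk -v trig="$TRIG" '$1 >= trig {print "would ban " $2}'
```

This prints “would ban 10.0.0.5”; in real BFD, the ban command from conf.bfd runs at that point instead.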

Now, your installation of vsftpd may differ a little from mine; your logs may have different names, with columns in different orders, so use this rule only after testing that it works properly with your configuration. Best of luck with it. Here’s my implementation of a rule to detect vsftpd script kiddie attacks:

if [ -f "$REQ" ]; then

ARG_VAL=`$TLOGP $LP $TLOG_TF | grep -w vsftpd | grep -i rhost | grep -iwf $PATTERN_FILE | awk '{print $13":"$12}'| tr -d '[]()?@'| cut -d = -f 2,4 | grep -E '[0-9]+'`

fi

The cut command is a new one here: like awk, it lets you slice particular columns out of a line, but it also lets you specify the delimiter that separates the columns. In this case, I use cut with “=” as the delimiter to take fields 2 and 4, picking off the values from the two columns formatted as “rhost=” and “ruser=badguy@badplace.com”.
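A toy example of cut’s field selection, using a made-up input string with “=” as the delimiter throughout (real log lines are messier):

```shell
# Split on "=", the fields are: 1=rhost, 2=10.0.0.5, 3=ruser, 4=admin.
# cut prints the selected fields rejoined with that same delimiter.
echo 'rhost=10.0.0.5=ruser=admin' | cut -d = -f 2,4
```

This prints “10.0.0.5=admin”: fields two and four, with the “=” delimiter between them.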

MonadLUG meeting notes, 14-June-2007: Ed Haynes of WindRiver: real-time and Linux

Bill Sconce posted the notes from the MonadLUG meeting of 14-June-2007, one I had to miss due to client projects. It sounds like it was a really interesting meeting. The push to tweak the kernel of Linux to be responsive in a real-time environment benefits us all, as some portions of that specialized work can be rolled into the main-line kernel code. This is one of the great benefits of Open Source, where developers “scratching their itch” – working on their specific needs – can contribute back to the greater community at little or no cost to them.

I heard a similar sentiment voiced at FUDCon ’07 Boston in presentations about the One Laptop Per Child machines: in tracking down code that was running down the batteries on these cute little laptops, the OLPC crowd found entire classes of code that worked fine on desktop and server machines plugged into the wall, but wasted CPU cycles where a more power-friendly algorithm could be implemented. This doesn’t just benefit the OLPC crowd; some of their work goes back into mainline kernels, where it makes everyone’s laptop battery last longer and lets server stacks idle cooler, drawing less AC power, needing less air conditioning, lowering the heat-dissipation requirements of data centers, and slowing global warming. Yet another case of Open Source saving the world.

You say Framework, I say Toolkit, let’s call the whole thing off

Well, it seems that a million monkeys pounding on a million keyboards will write… a million PHP frameworks. I’ve got a client project that needs a rich client front end, likely with DHTML-Javascript-AJAX, a powerful middle tier with complex business logic and processing, and an interface to the backend data that can both support (and hopefully automate and generate) the dozens of generic CRUD processes but also allow overriding with complex SQL (you know, the nasty, multiple page, outer join, union, correlated subquery, inline-function SQL that takes days to write, debug and document, runs in milliseconds, and makes the whole operation worthwhile). Bonus points for caching at the component level, plugin widgets that do all the latest cool stuff (tags, RSS, digg, widgets, etc.) and a smart graphical IDE that can act as a design surface, debugger and data browser. A good manual available online and on paper, along with an active developer community is essential, too. Oh, and Free as in beer along with Free as in speech is desirable (for the former) and required (for the latter).

A guy can dream, can’t he?

There’s a great comparison chart on 10 PHP frameworks from PHPit.net, although it’s dated last year. The many comments indicate that some folks think some of the features aren’t properly credited or are misunderstood. Some posters may disagree on the meanings of “MVC” or “ORM.” Some may disagree on what “is” is. Some, I suspect, are those monkeys typing at keyboards. Others likely have valid points. The chart is 14 months old (March 26, 2006) and not getting any younger, while the frameworks either rocket ahead or wallow in the doldrums. I note, for example, that the Zend Framework entered its version 1.0 Release Candidate phase just two weeks ago, a major milestone usually taken with some gravity.

There’s not a lot of discussion on how and why these ten frameworks were chosen. Why not blueshoes or dojo? And how about those CMSes? A number of the more powerful Content Management Systems could serve as the basis for an application: they already have a user GUI, a writer/editor/moderator/developer UI, connections to a database, and “stuff” in the middle. How well that stuff is designed, and whether it’s flexible enough to hold application logic, raises the philosophical question of where an application begins and where content management ends. I fear that way lies madness: a tool developed for one specific purpose can be a rough fit when stretched into a general-purpose tool. The closer the designers stayed to their original focus of “delivering content,” the less likely it is to be flexible enough.

It looks like I’ve got my work cut out for me sorting the wheat from the chaff… any pointers from readers would be welcomed. The good news: it’s all about choice. Having many choices is great news.

O’Reilly Radar > Better Gmail

At O’Reilly Radar, Tim O’Reilly points to Paul Kedrosky pointing to Lifehacker’s Better Gmail. The Firefox extension looks like it brings some real power and extensibility to the already powerful GMail platform. Tim notes:

A really interesting side note: as Better Gmail is a firefox extension, its not available for IE users. Its an interesting twist on the browser wars. In the old days, Microsoft and Netscape fought to lock in users with incompatible extensions. Here we see the same thing happening simply because that one platform is open and the other is not. The users themselves are evolving the browser.

I agree with Tim’s observation, but cringe at the term “users.” Many years ago I attended a session in Redmond where I heard two ‘Softies talking about the product they were shipping and referring to us as “users.” The product was Visual Studio. We’re not users, I thought, we’re developers! We’re producers. So, “users” aren’t evolving the browser. We need to get out of this “us – them” mentality. We are the users. We are the producers. We make the world we choose to live in, by action or inaction. There are no “users.” Only us.

Okay, enough ranting. GMail extensions look pretty cool. Check them out!


This work by Ted Roche is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States.