Planet Perl

March 21, 2011

Perl NOC Log: Planet Perl is going dormant

Planet Perl is going dormant.  This will be the last post there for a while.

image from planet.perl.org

Why?  There are better ways to get your Perl blog fix these days.

You might enjoy some of the following:

Will Planet Perl awaken again in the future?  It might!  The universe is a big place, filled with interesting places, people and things.  You never know what might happen, so keep your towel handy.  

by Robert at March 21, 2011 02:04 UTC

Ricardo Signes: improving on my little wooden "miniatures"

A few years ago, I wrote about cheap wooden discs as D&D minis, and I've been using them ever since. They do a great job, and cost nearly nothing. For the most part, we've used a few for the PCs, marked with the characters' initials, and the rest for NPCs and enemies, usually marked with numbers.

With D&D 4E, we've tended to have combats with more and more varied enemies. (Minions are wonderful things.) Numbering has become insufficient. It's too hard to remember what numbers are what monster, and to keep initiative order separate from token numbers. In the past, I've colored a few tokens in with the red or green whiteboard markers, and that has been useful. So, this afternoon I found my old paints and painted six sets of five colors. (The black ones I'd already made with sharpies.)

D&D tokens: now in color

I'm not sure what I'll want next: either I'll want five more of each color or I'll want five more colors. More colors will require that I pick up some white paint, while more of those colors will only require that I re-match the secondary colors when mixing. I think I'll wait to see which I end up wanting during real combats.

These colored tokens should work together well with my previous post about using a whiteboard for combat overview. Like-type monsters will get one color, and will all get grouped to one slot on initiative. Last night, for example, the two halfling warriors were red and acted in the same initiative slot. The three halfling minions were unpainted, and acted in another, later slot. Only PCs get their own initiative.

I think that it did a good amount to speed up combat, and that's even when I totally forgot to bring the combat whiteboard (and the character sheets!) with me. Next time, we'll see how it works when it's all brought together.

by rjbs at March 21, 2011 00:47 UTC

March 20, 2011

Dave Cross: Perl Vogue T-Shirts

Is Plack the new Black?

In Pisa I gave a lightning talk about Perl Vogue. People enjoyed it and for a while I thought that it might actually turn into a project.

I won’t, though. It would just take far too much effort. And, besides, a couple of people have pointed out to me that the real Vogue are rather protective of their brand.

So it’s not going to happen, I’m afraid. But as a subtle reminder of the ideas behind Perl Vogue I’ve created some t-shirts containing the article titles from the talk. You can get them from my Spreadshirt shop.

by Dave Cross at March 20, 2011 12:02 UTC

Perl NOC Log: Big CPAN.org update

CPAN has gotten its first real update in a while tonight; the content is from the cpanorg git repository.

We tried to get the FAQ cleaned up a bit (though there's plenty of work left) and Leo Lapworth pretty heroically also did a first pass on cleaning up the ports page.

You might also notice a search box for search.cpan.org (which we find appropriate), a list of recently uploaded modules on the homepage, and a new page on how to mirror CPAN.

If you read the latter page, you'll see that the master mirror is now cpan-rsync.perl.org::CPAN (rsync only).  In the coming weeks we'll work on encouraging the CPAN mirrors to switch to mirror from here to ease the load on FUnet, the sponsor of the master mirror for the last 15 years.

Work is also coming along well on the instant update mirroring system.

 - ask

by Ask Bjørn Hansen at March 20, 2011 09:10 UTC

March 19, 2011

David Golden: With LWP 6, you probably need Mozilla::CA

LWP 6 makes hostname verification the default -- so note this from LWP::UserAgent:

If hostname verification is requested, and neither SSL_ca_file nor SSL_ca_path is set, then SSL_ca_file is implied to be the one provided by Mozilla::CA. If the Mozilla::CA module isn't available SSL requests will fail. Either install this module, set up an alternative SSL_ca_file or disable hostname verification.

If you use LWP and want SSL, you need IO::Socket::SSL (recommended) and Mozilla::CA.
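
For illustration, here's a minimal sketch (not from LWP's docs, and with a placeholder URL) of wiring a user agent up to the Mozilla::CA bundle explicitly:

use strict;
use warnings;
use LWP::UserAgent;
use Mozilla::CA;

# Point LWP 6 at the CA bundle shipped with Mozilla::CA.
my $ua = LWP::UserAgent->new(
    ssl_opts => {
        verify_hostname => 1,                          # the LWP 6 default
        SSL_ca_file     => Mozilla::CA::SSL_ca_file(),
    },
);

my $res = $ua->get('https://www.example.org/');        # placeholder URL
print $res->status_line, "\n";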

by dagolden at March 19, 2011 02:57 UTC

March 17, 2011

Dave Cross: Perl News

Remember use.perl? It’s moth-balled now, but for years it provided two valuable services to the Perl community.

Firstly it provided a hosted blog platform which many people used to write about many things – sometimes even Perl. Of course we now have blogs.perl.org which provides a very similar service.

And secondly, it provided a place where people could submit stories related to Perl and then editors would approve the stories and publish them on the front page. Since use.perl closed down, the Perl community hasn’t really had a centralised site for that.

Over the last eighteen months or so I’ve had conversations with people about building a site that replaced that part of use.perl. But there’s always been something more interesting to work on.

Then, at the start of this week, Leo asked if I knew of a good Perl news feed that he could use on the front page of perl.org. And I realised that I'd been putting it off for too long. A few hours of WordPress configuration and Perl News was ready to go.

So if you have any interesting Perl news to share, please submit it to the site.

by Dave Cross at March 17, 2011 14:01 UTC

Leo Lapworth: New Perl news site launches

http://perlnews.org/ has just launched and will be providing a source for major announcements related to The Perl Programming Language (http://www.perl.org/). Find out more at http://perlnews.org/about/ - or if you have a story, submit it at http://perlnews.org/submit/.

All stories are approved to ensure relevance.

Thanks

The Perl News Team.

by Ranguard at March 17, 2011 13:44 UTC

Curtis Poe: 80% Hacks

I'm still blogging five days a week, but obviously not here. That's largely because my new daughter is forcing me to choose where I spend my time and I can't blog too much about what I do lest I reveal trade secrets. So, just to keep my hand in, here's an ugly little "80% hack" that lets me find bugs like mad in OO code. I should really combine this with my warnings::unused hack and start building up a tool to find issues in legacy code.

First, an "80% Hack" is based on the Pareto Principle, which states that 80% of the results stem from 20% of the effort. So I often write what I call 80% hacks, which are simply quick and dirty tools that get things done.

The idea is simple. In legacy OO code where we're not using Moose, we have a nasty tendency to reach inside a blessed hashref. However, as classes start getting old and crufty, particularly in legacy code which is earning the company a ton of money, it's easy for someone to either misspell a hash key or refer to keys which are no longer used. So I assume that any hash key used once and only once is suspect, and I also assume the accesses look like this:

$self->{ foo }
$_[0]  ->  { "bar" } # yeah, we need arbitrary whitespace
shift->{'something'} # and quotes

Yes, this code could be improved tremendously, but 80% hacks are personal hacks which I simply don't pour a lot of time and effort into. Besides, they're fun.

#!/usr/bin/env perl

use strict;
use warnings;
use autodie ':all';
use Regexp::Common;

my $module = shift or die "usage: $0 pm_file";

#my $module = '/home/cpoe/git_tree/main/test_slot';

my $key_found = qr/
    (?: \$self | \$_\[0\] | shift )  # $self or $_[0] or shift
    \s* ->                         # ->
    \s* {                          # { 
    \s* ($RE{quoted}|\w*)          # $hash_key
    \s* }                          # }
/x;

open my $fh, '<', $module;

my %count_for;
while (<$fh>) {
    while (/$key_found/g) {
        my $key = $1;
        $key =~ s/^["']|['"]$//g;    # try and strip the quotes

        no warnings 'uninitialized';
        $count_for{$key}{count}++;
        $count_for{$key}{line} = $.;
    }
}

foreach my $key ( sort keys %count_for ) {
    next if $count_for{$key}{count} > 1;
    print "Possibly unused key '$key' at line $count_for{$key}{line}\n";
}

I run that with a .pm file as an argument and I get a report like:

Possibly unused key '_key1' at line 1338
Possibly unused key '_key2' at line 5325
...
Possibly unused key '_keyX' at line 4031

It's amazing how many bugs I've found with this.

Leïla and Lilly-Rose. Lilly-Rose is 3 weeks old in this photo.

I can't blog as much as I used to, but they make it all worth it.

by Ovid at March 17, 2011 09:33 UTC

brian d foy: Recreating a Perl installation with MyCPAN

A goal of the MyCPAN work was to start with an existing Perl distribution and work backward to the MiniCPAN that would re-install the same thing. I hadn't had time to work on that part of the project until this month.

The first step I've had for a while. I've created a database of any information I can collect about a file in the 150,000 distributions on BackPAN. There are about 3,000,000 candidate Perl module or script files. That includes basics such as the MD5 digest of the file, the file size, the Perl packages declared in the file, and the package versions.
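
Collecting that sort of per-file information is the easy part; here's a rough sketch (not the actual MyCPAN code) of what gets recorded for one file:

use strict;
use warnings;
use Digest::MD5;
use Module::Metadata;

sub file_info {
    my ($path) = @_;

    open my $fh, '<', $path or die "Can't open $path: $!";
    binmode $fh;
    my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;

    my $meta = Module::Metadata->new_from_file($path);

    return {
        file     => $path,
        md5      => $md5,
        size     => -s $path,
        packages => [ $meta->packages_inside ],
        versions => { map { $_ => $meta->version($_) } $meta->packages_inside },
    };
}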

The next step is what I've been doing this week: collect the same information on the files in a Perl installation, which is much easier to do. There's no wacky distribution stuff involved.

Putting those two together should find the distributions that could make up the installation. With that list of distros, it's just a matter of creating the right 02packages file that a CPAN client can use. Easy peasy, I thought.

But, it's not that easy. Each file in the existing installation might have come from several distributions. That is, between different versions of a distribution, it's likely that many of the modules didn't change. So, looking at a single file doesn't lead to a single distribution. It might list several possible distributions.

But that's a start. Other files from that distribution should be present, and they each might come from several distributions even if one of them changed. If there's any file that only belongs to one distribution, that collapses everything for that distribution. If not, I have to find the overlap in possible distributions. There should be one distribution that overlaps more than all of the others, and that should be the right distribution.
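
Here's a toy sketch of that overlap step (the %candidates hash is made up for illustration; it maps each installed file to the distributions its digest matched):

use strict;
use warnings;

my %candidates = (
    'lib/Foo.pm'     => [ 'Foo-Bar-1.01', 'Foo-Bar-1.02' ],
    'lib/Foo/Bar.pm' => [ 'Foo-Bar-1.02' ],
    'lib/Foo/Baz.pm' => [ 'Foo-Bar-1.01', 'Foo-Bar-1.02', 'Baz-Quux-0.03' ],
);

my %votes;
for my $file ( keys %candidates ) {
    $votes{$_}++ for @{ $candidates{$file} };
}

# The distribution that explains the most installed files wins.
my ($best) = sort { $votes{$b} <=> $votes{$a} } keys %votes;
print "Best guess: $best\n";    # Foo-Bar-1.02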

That's not quite right either though, because some distribution versions don't change the module files. They update a test or the build file or something besides whatever is in lib. You'd think that at least the $VERSION would change, but think of any exception and you'll probably find it on BackPAN. That's not as horrible as it seems though. If all of the module files are the same, it doesn't matter which distribution I use, does it?

But then, there are some files that not only might come from more than one version of a particular distribution, but might also be in a completely different distribution. Some distributions have lifted files from other distributions. Files from the URI and LWP modules show up in other distributions. How should I figure out which one should be the candidate distribution?

The database I was using was just an extract of all of the information I have on each distribution, and it's oriented to individual files. I select records to match up MD5 digests. However, when I get records back with different distributions, which one might be installed? If an installed file might have come from both Foo-Bar and Baz-Quux, I have to remove one of the distributions somehow. In that case, I have to step back and look at what else from either distribution might have been installed. If the other files from Foo-Bar aren't there, it's probably not Foo-Bar.

That might be the end of the story, but what if both Foo-Bar and Baz-Quux are installed? That part I haven't figured out, but it's likely that the previous step will be inconclusive since the files from both distributions will all be there. However, there's also the chance that an older version of Foo-Bar and a newer Baz-Quux is there. If they both install a Foo.pm file, the older version in Foo-Bar might have been overwritten by an updated version from Baz-Quux. So, every file except one from Foo-Bar is there. That means there's possibly some order dependence there, so I would have to make sure I install modules in the right order to recreate the installation.

If the module installation order matters, that might rule out creating a Task::* distribution, which can't guarantee the installation order, I think. A Bundle::* might be able to do it, though.

So, you think that's the end of it? Think about configure_requires and build_requires. Anything those need has to be in the MiniCPAN too, even if it isn't in the installation. You have the option of not permanently installing those modules, so you might not see them in the analysis. Even when I get a list of distributions, I then have to check their dependencies to see if there's anything extra I need to add.
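
At least pulling those extra prerequisites out of a distribution's META file is straightforward; a rough sketch with CPAN::Meta:

use strict;
use warnings;
use CPAN::Meta;

# Read the distro's metadata (META.json or META.yml) and list the
# configure- and build-time prereqs that also belong in the MiniCPAN.
my $meta    = CPAN::Meta->load_file('META.json');
my $prereqs = $meta->effective_prereqs;

for my $phase (qw( configure build )) {
    my $reqs = $prereqs->requirements_for( $phase, 'requires' );
    print "$phase: $_\n" for sort $reqs->required_modules;
}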

So, not so bad.

by brian d foy at March 17, 2011 08:37 UTC

March 16, 2011

Dave Rolsky: Who Are the Perl 5 Core Docs For?

I've been spending a fair bit of time working on Perl 5 core documentation. I started by editing and reorganizing some of the documents related to core hacking. This update will be in 5.14, due to be released in April. I'm also working on replacing the existing OO tutorials and updating the OO reference docs, though this won't make it into the 5.14 release.

There's been a lot of discussion on my OO doc changes, some of it useful, some of it useless, and some of it very rude (welcome to p5p!). Many of the people in the discussion don't have a clear vision of who the docs are for. Without that vision, it's really not possible to say whether a particular piece of documentation is good or not. A piece of documentation has to be good for a particular audience.

There are a number of audiences for the Perl 5 core docs, and they fall along several axes. Here are the axes I've identified.

Newbies vs experienced users

Newbie-ness is about being new to a particular concept. You could be an experienced Perl user and still be new to OO programming in general, or new to OO in Perl.

For my OO docs, I'm writing for two audiences. First, I'm writing for people who are learning OO. That's why the document starts with a general introduction to OO concepts. Second, I'm writing for people who want to learn more about how to do OO in Perl 5. For those people, the tutorial points them at several good OO systems on CPAN.

I'm not writing for people who already know Perl 5 OO and want to learn more; that's what the perlobj document is for.

From the discussion on p5p, I can see that many people there have trouble understanding how newbies think. I like how chromatic addresses these issues in a couple of his blog posts.

How the reader uses Perl

Perl is used for lots of different tasks, including sysadmin scripts, glue code in a mostly non-Perl environment, full app development, etc.

Ideally, we'd have tutorial documents that are appropriate for each of these areas. I think the OO tutorial is most likely to be of interest to people writing full Perl applications. If you're just whipping up some glue code, OO is probably overkill.

It would also be great to see some job-focused tutorials, like "Basic Perl Concepts for Sysadmins" or "Intro to Web Dev in Perl 5". Yes, I know there are books on these topics, but having at least some pointers to modules/books/websites in the core docs is useful.

Constraints on the reader's coding

If you're doing green field development, you have the luxury of using the latest and greatest stuff on CPAN. If you're maintaining a 10-year old Perl web app (I'm so sorry), then you probably don't. Some readers may not be able to install CPAN modules. Some readers are stuck with in house web frameworks.

People stuck with old code need good reference docs that explain all the weird shit they come across. People writing new code should be guided to modern best practices. They don't need to know that you can implement Perl 5 OO by hand using array references, ties, and lvalue methods.

My OO tutorial is obviously aimed toward the green field developers. It's all about pointing them at good options on CPAN. As I revise perlobj, I'm trying to make sure that I cover every nook and cranny so that the poor developer stuck with 2001 Perl OO code can understand what they're maintaining.

(Sadly, that's probably my code they're stuck with.)

Conclusion

I'd like to see more explicit discussion of who the intended readers are when we discuss core documentation. Any major doc revision should start with a vision of who the docs are for.

There are probably other axes we can think about when writing documentation as well. Comments on this are most welcome.

by Dave Rolsky at March 16, 2011 20:13 UTC

March 15, 2011

perl.com: Facebook Authentication with Perl and Facebook::Graph

Basic integration of software and web sites with Facebook, Twitter, and other social networking systems has become a litmus test for business these days. Depending on the software or site you might need to fetch some data, make a post, create events, upload photos, or use one or more of the social networking sites as a single sign-on system. This series will show you how to do exactly those things on Facebook using Facebook::Graph.

This first article starts small by using Facebook as an authentication mechanism. There are certainly simpler things to do, but this is one of the more popular things people want to be able to do. Before you can do anything, you need to have a Facebook account. Then register your new application (Figure 1).

registering a Facebook application
Figure 1. Registering a Facebook application.

Then fill out the "Web Site" section of your new app (Figure 2).

registering your application's web site
Figure 2. Registering your application's web site.

Registering an application with Facebook gives you a unique identifier for your application as well as a secret key. This allows your app to communicate with Facebook and use its API. Without it, you can't do much (besides screen scraping and hoping).

Now you're ready to start creating your app. I've used the Dancer web app framework, but feel free to use your favorite. Start with a basic Dancer module:

package MyFacebook;

use strict;
use Dancer ':syntax';
use Facebook::Graph;

get '/' => sub {
  template 'home.tt'
};

true;

That's sufficient to give the app a home page. The next step is to force people to log in if they haven't already:

before sub {
    if (request->path_info !~ m{^/facebook}) {
        if (session->{access_token} eq '') {
            request->path_info('/facebook/login')
        }
    }
};

This little bit of Dancer magic says that if the path is not /facebook and the user has no access_token attached to their session, then redirect them to our login page. Speaking of our login page, create that now:

get '/facebook/login' => sub {
    my $fb = Facebook::Graph->new( config->{facebook} );
    redirect $fb->authorize->uri_as_string;
};

This creates a page that will redirect the user to Facebook, and ask them if it's ok for the app to use their basic Facebook information. That code passes Facebook::Graph some configuration information, so remember to add a section to Dancer's config.yml to keep track of that:

facebook:
    postback: "http://www.madmongers.org/facebook/postback/"
    app_id: "XXXXXXXXXXXXXXXX"
    secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXX"

Remember, you get the app_id and the secret from Facebook's developer application after you create the app. The postback tells Facebook where to post back to after the user has granted the app authorization. Note that Facebook requires a slash (/) on the end of the URL for the postback. With Facebook ready to post to a URL, it's time to create it:

get '/facebook/postback/' => sub {
    my $authorization_code = params->{code};
    my $fb                 = Facebook::Graph->new( config->{facebook} );

    $fb->request_access_token($authorization_code);
    session access_token => $fb->access_token;
    redirect '/';
};

NOTE: I know it's called a postback, but for whatever reason Facebook does the POST as a GET.

Facebook's postback passes an authorization code—a sort of temporary password. Use that code to ask Facebook for an access token (like a session id). An access token allows you to request information from Facebook on behalf of the user, so all of those steps are, essentially, your app logging in to Facebook. However, unless you store that access token to use again in the future, the next request to Facebook will log you out. Therefore, the example shoves the access token into a Dancer session to store it for future use before redirecting the user back to the front page of the site.

NOTE: The access token we have will only last for two hours. After that, you have to request it again.

Now you can update the front page to include a little bit of information from Facebook. Replace the existing front page with this one:

get '/' => sub {
    my $fb = Facebook::Graph->new( config->{facebook} );

    $fb->access_token(session->{access_token});

    my $response = $fb->query->find('me')->request;
    my $user     = $response->as_hashref;
    template 'home.tt', { name => $user->{name} }
};

This code fetches the access token back out of the session and uses it to find out some information about the current user. It passes the name of that user into the home template as a template parameter so that the home page can display the user's name. (How do you know what to request and what responses you get? See the Facebook Graph API documentation.)

While there is a bit of a trick to using Facebook as an authentication system, it's not terribly difficult. Stay tuned for Part II where I'll show you how to post something to a user's wall.

by JT Smith at March 15, 2011 18:36 UTC

CPAN Testers: Metabase SSL Certificate

For anyone who may have been affected by the upgrade to LWP, the situation should now be resolved. David has put in place a third-party verified SSL certificate on the Metabase server, so all submissions should now be able to verify certificate authenticity.

If you have implemented any short-term fixes, you may need to remove them before accepting the new certificate.

We now return you to your scheduled programming :)

Cross-posted from the CPAN Testers Blog

by CPAN Testers at March 15, 2011 14:08 UTC

David Golden: Fixed CPAN Testers reporting with LWP 6

As Barbie reported, CPAN Testers broke under LWP version 6, as this version of LWP now defaults to rejecting unverifiable SSL connections (e.g. self-signed certificates). That meant that CPAN Testers who upgraded their LWP could no longer submit reports (at least via https). The quick and obvious solution was to buy an SSL certificate, and that's now done. If you visit https://metabase.cpantesters.org/, you can see the new certificate in action.

by dagolden at March 15, 2011 13:19 UTC

March 14, 2011

Chris Williams: Mangling Exchange GUIDs

I spent a good few hours today attempting to use the MailboxGUID returned from the WMI Exchange provider to search for the associated Active Directory account, using the msExchMailboxGuid attribute.

Here are two functions I came up with in the end. One to convert a MailboxGUID to something that a search on msExchMailboxGuid will like:

sub exch_to_ad {
  my $guid = shift;
  $guid =~ s/[\{\}]+//g;
  my $string = '';
  my $count = 0;
  foreach my $part ( split /\-/, $guid ) {
    $count++;
    if ( $count >= 4 ) {
      $string .= "\\$_" for unpack "(A2)*", $part;
    }
    else {
      $string .= "\\$_" for reverse unpack "(A2)*", $part;
    }
  }
  return $string;
}

And another to take an msExchMailboxGuid field, which is a byte array, and convert it to a MailboxGUID.

sub ad_to_exch {
  my $guid = shift;
  my @vals = map { sprintf("%.2X", ord $_) } unpack "(a1)*", $guid;
  my $string = '{';
  $string .= join '', @vals[3,2,1,0], '-', @vals[5,4], '-', 
     @vals[7,6], '-', @vals[8,9], '-', @vals[10..$#vals], '}';
  return $string;
}
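
For example (with a made-up GUID), the first function builds an LDAP filter and the second decodes the raw attribute bytes back into the brace-and-dash form:

# Example usage of the two functions above; the GUID value is made up.
my $mailbox_guid = '{A1B2C3D4-E5F6-0718-293A-4B5C6D7E8F90}';

# Build an LDAP filter for the Active Directory search.
my $filter = '(msExchMailboxGuid=' . exch_to_ad($mailbox_guid) . ')';

# Going the other way: msExchMailboxGuid comes back as 16 raw bytes.
my $raw  = "\xD4\xC3\xB2\xA1\xF6\xE5\x18\x07\x29\x3A\x4B\x5C\x6D\x7E\x8F\x90";
my $guid = ad_to_exch($raw);    # gives back the {...} form above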

Hopefully this should save other people some time.

by bingos at March 14, 2011 13:50 UTC

CPAN Testers: LWP v6.00 & Self-signed Certificates

If you're an existing CPAN Tester and have recently upgraded LWP, you may have noticed that your report submissions have been failing. The reason is that LWP::UserAgent now requires that any https request verify the certificate associated with it. Because the Metabase has a self-signed certificate, this doesn't provide enough verification and so fails.

In the short term, if you don't need to update LWP (libwww-perl), refrain from doing so for the time being. For those who have already done so, or have recently built test machines from a clean starting point, you will either need to wait until we have put a long-term solution in place, or may wish to look at a solution from Douglas Wilson. Douglas has created a "hypothetical distribution", which you can see via a gist.
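
As a blunt stopgap (this is not Douglas's distribution, just LWP's own switch), you can also tell LWP 6 not to verify the peer at all, either per agent or via an environment variable:

# Stopgap only: switch off LWP 6's certificate verification for this agent.
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(
    ssl_opts => { verify_hostname => 0 },
);

# Or, without touching any code, for a whole process
# (the script name is just a placeholder):
#   PERL_LWP_SSL_VERIFY_HOSTNAME=0 perl submit-reports.pl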

Others have also blogged about the problem, and have suggestions and insights as to how to overcome this for the short term:

We will have more details of the longer term solution soon.

Cross-posted from the CPAN Testers Blog

by CPAN Testers at March 14, 2011 09:08 UTC

Sawyer X: Dancer release codename "The Schwern Cometh"

We've decided we're gonna start releasing Dancer under codenames that relate to people who've worked on the release.

In this release (1.3020) we've seen the continued (and blessed!) involvement of one "Michael G. Schwern". To some of you he might just be a "mike" or "michael" (or perhaps "the schwern"), but none of us in the core knew Schwern personally before his involvement with Dancer, and this came as a very welcome and pleasant surprise.

Considering the storm of issues and pull requests done by Schwern, we decided the next version should be named after him, hence "The Schwern Cometh". :)

The latest version represents only a week or so of development but carries the following statistics:


  • 6 contributors

  • 6 bug fixes

  • 12 features and enhancements

  • 10+ issues closed

I really do see this as exceptional work. Other than Schwern I also want to thank Naveed Massjouni and Maurice Mengel for their contributions to this release (and any previous release!).

In the near future we'll also unveil the most elaborate hooks subsystem in the micro web framework world. I already know whose names will be splashed on that release. :)

by Sawyer X at March 14, 2011 08:32 UTC

March 13, 2011

Dave Rolsky: What Makes for a Perfect OO Tutorial Example?

Recently I've been working on revising the Perl 5 core OO documentation, starting with a new OO tutorial.

My first draft used Person and Employee as my example classes, where Employee is a subclass of Person. After I posted the first draft, several people objected to these particular classes. I realized that I agreed with their objections, but I wasn't able to come up with anything better.

I brought this up on the #moose IRC channel, and we had a really interesting discussion. Mostly it consisted of people coming up with various ideas and me shooting them down. The rejected suggestions included:

  • Person/Employee
  • Number/Integer
  • Real/Complex (numbers)
  • Window/ScrollableWindow
  • Animal/Moose
  • CD/Single
  • Assessment/Survey (in the context of teaching assessments)
  • Others I'm probably forgetting

Let's go through my criteria and talk about why each of these examples was rejected.

No Abstract Base Classes

The base class must be meaningful on its own. It must be something you might instantiate. This ruled out Animal/Moose; we don't want to instantiate a generic Animal. Our understanding of animals is always more specific. At the very least, we recognize them as Birds, Mammals, Fish, and so on, if not as specific species.

Instead, Animal is really more of a role. In fact, thinking back to high school biology, deciding whether something is an animal is based entirely on its behavior (its interface).

If the parent class is better as a role, the suggestion doesn't work.

The other suggestions all passed this test.

Must Not Be Too Domain Specific

The example classes should not require domain knowledge to understand. The Real/Complex and Assessment/Survey suggestions are clearly too domain specific. The Window/ScrollableWindow suggestion may also fail this. Yes, everyone knows that some windows scroll and some don't, but very few people would know how that's implemented.

The example needs to be something that any programmer can be expected to understand.

Must Lend Itself to Example Methods

I need an example that lends itself well to writing small example methods. The Window/ScrollableWindow suggestion fails this criterion. The actual implementation of a windowing toolkit is quite complex, and extremely domain-specific.

The Subclass's Specialization Must Be Intrinsic to Its Nature

This one is best explained through an example that doesn't pass the test, Person/Employee. Being a Person is intrinsic to the class. However, when a person is an Employee, that's not really intrinsic to the Employee; it's just something a Person does. A Person can also be a spouse, a parent, a child, etc.

In other words, these are all roles that a Person plays. Clearly, this example is better implemented through roles.

Must Not Be Useless

The classic shapes example used in so many books falls into this category. It's really hard for me to imagine a program where I need to model Ellipse and Circle classes. I suppose I might do this if I were writing MS Paint.

The shapes example is useful for illustrating some technical ideas, but it's too abstract for a good tutorial.

Must Not Raise the Idea of Specialization By Constraint

Specialization by constraint is an object orientation concept defined by Chris Date and Hugh Darwen in their book The Third Manifesto.

This is a complex idea perhaps best illustrated by an example:

my $number = Number->new(3.9);
$number->add(0.1);

Under the system proposed by Date and Darwen, the $number object would automatically become an Integer object when that is appropriate.

This is a fascinating idea, but something that's best left out of a basic OO tutorial.

As an aside, if you're interested in DBMS theory and design, you should really read The Third Manifesto, which I think has now been renamed as Databases, Types and the Relational Model (their website is horrible and confusing).

The Subclass Should Add Attributes and Behavior

The Number/Integer suggestion fails in this regard because the subclass takes away an attribute.

The Subclass Should Not Be Better As an Attribute

The CD/Single suggestion fails this criterion, since there's really no behavior or attribute difference between a CD and a Single. Instead, "single-ness" is better modeled as a simple attribute on a CD class.

The Winner

So after a lot of discussion, Jesse Luehrs (doy) suggested File/File::MP3. This example satisfies (almost) every criterion.

The File/File::MP3 example works really well in a number of ways:

  • The base class is not abstract.
  • A generic File makes perfect sense.
  • We can expect every programmer to understand the nature of the classes.
  • It lends itself well to simple example methods.
  • The subclass's nature is intrinsic. Files have one specific type, or we don't know their type. Yes, I know it's possible to have a file that satisfies multiple format requirements, but that's a bizarre special case.
  • It is clearly not useless, and is in fact something you might find yourself writing in real world code.
  • The subclass adds behavior (track title attribute, play() method, etc.).
  • The subclass is clearly not better modeled as an attribute.

The only negative is that this example does bring up the idea of specialization by constraint. In a real world implementation, you might have a File factory that looked at the file's contents and returned an appropriate File subclass.

There's no perfect example, but this one is significantly better than Person/Employee, and it's what I'll be using in my work on the core docs.
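
To make that concrete, here's a minimal sketch of the sort of pair I have in mind (illustrative only, not the actual tutorial text):

# Illustrative sketch only: not the actual tutorial text.
package File;
use strict;
use warnings;

sub new {
    my ( $class, %args ) = @_;
    my $self = { path => $args{path}, content => $args{content} };
    return bless $self, $class;
}

sub path    { $_[0]->{path} }
sub content { $_[0]->{content} }

sub print_info {
    my ($self) = @_;
    print "This file is at ", $self->path, "\n";
}

package File::MP3;
use strict;
use warnings;
use parent -norequire, 'File';

sub new {
    my ( $class, %args ) = @_;
    my $self = $class->SUPER::new(%args);
    $self->{title} = $args{title};    # the subclass adds an attribute ...
    return $self;
}

sub title { $_[0]->{title} }

sub play {                            # ... and new behavior
    my ($self) = @_;
    print "Playing ", $self->title, "\n";
}

package main;

my $mp3 = File::MP3->new( path => 'song.mp3', title => 'Some Song' );
$mp3->print_info;    # inherited from File
$mp3->play;          # specific to File::MP3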

Thanks, Jesse!

by Dave Rolsky at March 13, 2011 22:33 UTC

Dave Cross: Fedora and Centos CPAN RPMs

Today I’ve updated my spreadsheets of the CPAN modules that are available as RPMs from various repositories for Fedora and Centos. I see that in many cases the “official” repos are now more up to date than my own repo (which I originally set up because the official repos are sometimes out of date).

This is all a precursor to doing a lot more work on my repo. I need to know which RPMs are being kept up to date by other people so that I can ignore those modules.

But I thought that other people might find the data useful or interesting.

by Dave Cross at March 13, 2011 17:15 UTC

March 12, 2011

David Golden: Parallel make for perlbrew

If you've ever built Perl from scratch, you probably know how much faster it can be to make and test in parallel. On the other hand, if you use perlbrew on a multi-core processor, you probably already figured out that it wasn't using all your processors.

I was very pleased to discover an undocumented '-j' option in perlbrew 0.17 that switches on parallel make:

$ perlbrew -j 5 install perl-5.12.3

Currently, this only runs make in parallel, but I've submitted a patch to make it switch on parallel testing as well (for recent Perls that support it). The patch also documents the option.

I hope the new version will be out soon, but if you have 0.17, you can already start using '-j' for a small speed boost.

by dagolden at March 12, 2011 20:52 UTC

brian d foy: A rough draft of Learning Perl 6th Edition

We're almost finished with the updates to Learning Perl, 6th Edition. The big changes for this edition are beginner-appropriate features up to Perl 5.14 and a lot more Unicode. I've been keeping a diary of my progress at www.learning-perl.com. Now that we are mostly done, it's time for some tech reviewers to catch any lies (outright or by omission) that we've told. If you've done that for me before and would like to do it again, let me know and we'll make the proper arrangements.

In previous editions, we've also let people watch the progress of the book by reading the sources as we worked on them. Since then, we converted to the O'Reilly DocBook system that allows us to turn our work into PDF files that look very close to what the final book will present. Those PDF files are available to subscribers of The Perl Review in the Works in Progress section. Not only can you see the book as it stands now, in the middle of editing, warts and all, but you can give us feedback before the book actually commits to dead trees.

by brian d foy at March 12, 2011 13:37 UTC

Dave Cross: London.pm Tech Meeting

On Thursday we had the first London.pm tech meeting for a rather long time. But it was well worth the wait. We were at Net-A-Porter‘s very nice offices above the Westfield shopping centre. There were four interesting talks. Pete Sergeant talked about High Level Web Testing, Zefram explained the New Extensibility Features Coming in Perl 5.14, Dave Hodgkinson talked about using Perl, Hudson and Selenium together and finally James Laver introduced us to his form processing tool, Spark.

What impressed me most about the evening was the size of the turn-out. I’m told that eighty people signed up for the meeting and it seemed that most of them turned up. Perl is certainly thriving in London. In fact it seems that there are a number of companies who are struggling to find all of the Perl programmers that they need. A couple of the speakers ended with “we’re hiring” adverts.

And from a couple of conversations I had during the evening, it seems that the scarcity of good Perl programmers in London is starting to push Perl rates up. Seems that it’s a pretty good time to be a Perl programmer in London.

by Dave Cross at March 12, 2011 08:55 UTC

March 11, 2011

Ricardo Signes: low-tech combat and character tracker

I see lots of people talk about using software on their laptops or smartphones for tracking combat in D&D. Mostly, people talk about tracking initiative. This always struck me as weird: I just write down everybody in order on a scrap of paper and throw it out later.

I have seen some tools that show you everyone's defenses and hit points, and that's good, but the programs I've seen generally stink. I was always happy with my scrap of paper with "Orc - 12" and so on. At the most recent game, though, I think I must've said, "What's your reflex defense, again?" about two dozen times. I've been trying to find ways to avoid combat getting boring and repetitive, and eliminating that question seemed like a good way. I'm also rereading the chapters on the fundamentals in DMG and DMG2, and making quick references was encouraged there, too.

The web character builder prints out a little quick ref panel, but it had some things I didn't want and was missing some things that I did. The last thing I wanted was having to deal with visual clutter while trying to speed up a complex combat. I figured it would be easy to make my own cards, and it was.

I used OmniGraffle to make a roughly 2" x 6" form, and printed it, two up, on 4x6 glossy photo stock that I had sitting around. After a few iterations, I was happy with the result. I bought a small magnetic whiteboard. I'm going to keep the character cards on the board and organize them by initiative. Monster notes can be scribbled onto a 3x5 card and put into the initiative order, too. The exposed whiteboard surface is useful for noting who is bloodied, stunned, or whatever else.

new D&D combat tracking technology

I've ordered new magnetic clips to hold the cards so that I can move them with one hand. I think this is going to be a nice improvement, and I look forward to finding out whether or not I'm right.

by rjbs at March 11, 2011 02:12 UTC

March 10, 2011

Sebastian Riedel: Mojolicious 1.12: IPv6 goodness

I'm very happy to announce Mojolicious 1.12, the latest maintenance release in the "Smiling Cat Face With Heart-Shaped Eyes" series, which, in preparation for World IPv6 Day later this year, brings back full IPv6 support. All you need is a recent Perl and the excellent IO::Socket::IP.

% ./myapp.pl daemon --listen http://[::1]:3000
Thu Mar 10 11:26:59 2011 info Mojo::Server::Daemon:316 [1576]: Server listening (http://[::1]:3000)

Config files come in many flavors. To make adding new formats trivial, we've built a very minimalistic and Perl-ish config plugin that can be easily extended with more advanced parsers like JSON or YAML.

# myapp.conf
{
  foo => 1 + 1,
  bar => 'baz'
};

# myapp.pl
use Mojolicious::Lite;

plugin 'config';

get '/' => 'root';

app->start;
__DATA__

@@ root.html.ep
Foo: <%= $config->{foo} %>

And I'm sure many of you will be happy to hear that the experimental status has finally been removed from Hypnotoad and the TagHelper plugin, so you can use both in good conscience for new projects from now on.

<%= form_for login => begin %>
  Name:
  <%= text_field 'name' %>
  Password:
  <%= password_field 'password' %>
  <%= submit_button %>
<% end %>

I'm also making good progress on the "full-stack plugin", and there should be some exciting news coming up during the next few weeks. Stay tuned. ;)

by Sebastian Riedel at March 10, 2011 10:10 UTC

March 08, 2011

Perl NOC Log: CPAN Phishing

You may recently have received an email that looks like the one below.  In poor English, it asks for your "CPAN password" and birthday.  We're pretty sure that none of you would actually have replied, but if you did, you've been caught by a phishing attack.  Change your password ASAP!

We will never-ever-ever ask for your password via email.

Our friends at pobox.com have produced a great summary of phishing.

Comprehensive Perl Archive Network Support Desk

ATTN:

This is to inform you that we are carrying out a site upgrade, as a
mailbox Subscriber, we are carrying out a (inactive email-accounts)

Clean-up process to enable service upgrade efficiency.
Please be informed that we will delete all mail accounts that are non
functioning. You are to provide your mail account details as
follows(This will confirm your Cpan mailbox Login/usage Frequency):

*User name:
*Password:
*Date of birth:

Any user who fails to send the above information will be regarded as an illegal
user and will Have his/her account deleted from our DATA BASE and we will not
be responsible for the loss Of your account.

Thanks for using Cpan Email service as it is toward Serving you better


Copyright  © 2010 Comprehensive Perl Archive Network, Inc. All rights reserved.

by Robert at March 08, 2011 05:54 UTC

March 07, 2011

brian d foy: What should be core in Perl 5.16?

What are the Perl modules you immediately install when you get a new Perl? Jesse Vincent, the Perl 5 pumpking, opened the door, albeit slightly, to possibly considering maybe thinking about provisionally expanding the Standard Library. Is that modally weak enough for you? (Jesse tells me I misread him, so, maybe the door is not open and never was).

Larry designed Perl 5 to be extensible, which is another way of saying that he designed basic Perl 5 to be small. CPAN is great, but we also know that through various social and technical factors, mere mortals struggle with the idea of having to get their wheels, fenders, and mirrors separately once they buy a car. Distributions such as ActivePerl and Strawberry are popular partly because they come with the extra bits. Non-perl people with their fingers in the pie tend to think about those included parts differently than the "third-party" parts.

Now, the trick is that many tasks have their own sets of modules. There is some overlap, but I bet that many areas have modules that only they use. Aristotle said somewhere recently (probably in a comment) that there are at least five major application areas for Perl and that they have different needs and goals. Core Perl should try its best to satisfy everyone (although that doesn't mean it actually needs to satisfy everyone).

You also have to consider that every addition to core is another task on the maintainers' to-do list. Recently, p5p have made great advances in managing dual-lived modules, but that still doesn't mean it's painless. Also, anything dual-lived needs its prereqs to be dual-lived. That can be a huge amount of extra work piled on a numerically-stable group of workers, as well as trickle-down effects to dual-lived module maintainers. However, that shouldn't prevent us from at least daydreaming.

I have my own set that I immediately install because they relate to the work I do, which is a lot of data discovery and organization. Just because I immediately install them doesn't imply I think they should be in core. If most people immediately install them, that's a different story:

  • DBI with the mysql, postgres, and sqlite drivers.
  • LWP
  • JSON modules
  • XML::*, especially XML::Twig
  • HTML::Parser and various subclasses
  • WWW::Mechanize
  • YAML::*

There are other things I don't use but I know will make Standard Perl much more useful:

  • Class::MOP - there is noise about making MOP part of "built-in" Perl. A boy can dream, after all.
  • cpanminus - people like it and it solves most people's needs.

I'll update this list as I think about it more.

by brian d foy at March 07, 2011 21:36 UTC

David Golden: Announcing Module::Build 0.3800

I'm pleased to announce the release of Module::Build 0.3800, available on a CPAN Mirror near you.

The major enhancement since the 0.36XX series is support for CPAN Meta Spec version 2 files (MYMETA.json and META.json). Also, if you haven't kept up with Module::Build, the 0.3607 release was nearly a year ago and there have been over 20 development releases, mostly fixing various bugs. Here is an excerpt from the last year of Changes:

0.3800 - Sat Mar  5 15:11:41 EST 2011

  Summary of major changes since 0.3624:

    [ENHANCEMENTS]

    - Generates META.json and MYMETA.json consistent with version 2 of the
      CPAN Meta Spec. [David Golden]

  Also in this release:

  [BUG FIXES]

  - Autogenerated documentation no longer includes private actions from
    Module::Build's own release subclass. [Report by Timothy Appnel,
    fix by David Golden]

0.37_06 - Mon Feb 28 21:43:31 EST 2011

  [BUG FIXES]

  - prerequisites with the empty string instead of a version are
    normalized to "0".  (RT#65909)

  [OTHER]

  - More Pod typo/link fixes [Hongwen Qiu]

0.37_05 - Sat Feb 19 20:43:23 EST 2011

  [BUG FIXES]

  - fixes failing ppm.t in perl core

  [OTHER]

  - Pod typo fixes [Hongwen Qiu]

0.37_04 - Wed Feb 16 15:27:21 EST 2011

  [OTHER]

  - moved scripts/ to bin/ for less confusing porting to bleadperl

0.37_03 - Wed Feb 16 09:54:05 EST 2011

  [BUG FIXES]

  - removed an irrelevant test in t/actions/installdeps.t that was causing
    failures on some Cygwin platforms

  [OTHER]

  - dropped configure_requires as some CPAN clients apparently get
    confused by having things in both configure_requires and requires

  - bumped Parse::CPAN::Meta build prereq to 1.4401

  - bumped CPAN::Meta prereq to 2.110420

  - Pod typo fixes [Hongwen Qiu]

0.37_02 - Mon Feb  7 21:05:30 EST 2011

  [BUG FIXES]

  - bumped CPAN::Meta prereq to 2.110390 to avoid a regression in 2.110360

0.37_01 - Thu Feb  3 03:44:38 EST 2011

  [ENHANCEMENTS]

  - Generates META.json and MYMETA.json consistent with version 2 of the
    CPAN Meta Spec. [David Golden]

  [BUG FIXES]

  - t/signature.t now uses a mocked Module::Signature; this should be
    more robust across platforms as it only needs to confirm that
    Module::Build is calling Module::Signature when expected

  [OTHER]

  - Added CPAN::Meta and Parse::CPAN::Meta to prerequisites and dropped
    CPAN::Meta::YAML

0.3624 - Thu Jan 27 11:38:39 EST 2011

  - Fixed pod2html directory bugs and fixed creation of spurious blib
    directory in core perl directory when running install.t (RT#63003)
    [Chris Williams]

0.3623 - Wed Jan 26 17:45:30 EST 2011

  - Fixed bugs involving bootstrapping configure_requires prerequisites
    on older CPANPLUS clients or for either CPAN/CPANPLUS when using
    the compatibility Makefile.PL

  - Added diagnostic output when configure_requires are missing for
    the benefit of users doing manual installation

0.3622 - Mon Jan 24 21:06:50 EST 2011

  - No changes from 0.36_21

0.36_21 - Fri Jan 21 11:01:28 EST 2011

  - Changed YAML::Tiny references to the new CPAN::Meta::YAML module
    instead, which is the YAML-variant that is going into the Perl core

0.36_20 - Fri Dec 10 15:36:03 EST 2010

  *** DEPRECATIONS ***

  - Module::Build::Version has been deprecated.  Module::Build now depends
    directly upon version.pm.  A pure-perl version has been bundled in inc/
    solely for bootstrapping in case configure_requires is not supported.
    M::B::Version remains as a wrapper around version.pm.

  - Module::Build::ModuleInfo has been deprecated.  Module::Build now
    depends directly upon Module::Metadata (which is an extraction of
    M::B::ModuleInfo intended for general reuse).  A pure-perl version has
    been bundled in inc/ solely for bootstrapping in case
    configure_requires is not supported. M::B::ModuleInfo remains as a
    wrapper around Module::Metadata.

  - Module::Build::YAML has been deprecated.  Module::Build now depends
    directly upon YAML::Tiny.  M::B::YAML remains as a subclass wrapper.
    The YAML_support feature has been removed, as YAML is now an ordinary
    dependency.

0.36_19 - Tue Dec  7 13:43:42 EST 2010

  Bug fixes:

  - Perl::OSType is declared as a 'configure_requires' dependency, but is
    also bundled in inc (and loaded if needed) [David Golden]

0.36_18 - Mon Dec  6 16:46:49 EST 2010

  Changes:

  - Added dependency on Perl::OSType to refactor and centralize
    management of OS type mapping [David Golden]

  - When parsing a version number out of a file, any trailing alphabetical
    characters will be dropped to avoid fatal errors when comparing version
    numbers.  These would have been dropped (with a warning) anyway during
    an ordinary numeric comparison. (RT#56071) [David Golden]

  Bug fixes:

  - A Perl interpreter mismatch between running Build.PL and running Build
    is now a fatal error, not a warning (RT#55183) [David Golden]

  - Bundled Module::Build::Version updated to bring into sync with CPAN
    version.pm 0.86 [David Golden]

  - No longer uses fake user 'foo' in t/tilde (RT#61793) [David Golden]

  - Won't fail tests if an ancient Tie::IxHash is installed
    [Christopher J. Madsen]

  - Correctly report missing metafile field names [David Golden]

  - Suppress uninitialized value errors during Pod creation
    on ActiveState Perl [David Golden]

  - Return to starting directory after install action; this is
    an attempt to fix an install.t heisenbug (RT#63003) [David Golden]

  - A broken version.pm load won't cause Module::Build::Version to
    die trying to install itself as a mock version (RT#59499)
    [Eric Wilhelm and David Golden]

  - PERL_DL_NONLAZY is now always set when tests are run
    (RT#56055) [Dmitry Karasik]

  - 'fakeinstall' will use .modulebuildrc actions for 'install' if
    no specific 'fakeinstall' options are provided (RT#57279)
    [David Golden]

  - Add install*script to search path for installdeps client
    and search site, then vendor, then core paths

  - Skip noexec tmpdir check on Windows (RT#55667) [Jan Dubois]

  - Arguments with key value pairs may now have keys with "-" in them
    (RT#53050) [David Golden]

  - Add quotemeta to t/tilde.t test to fix Cygwin fails
    [Chris Williams and David Golden]

  - Build script now checks that M::B is at least the same version
    of M::B as provided in 'configure_requires' in META
    (RT#54954) [David Golden]

0.36_17 - Wed Oct 27 18:08:36 EDT 2010

  Enhancements:

  - Added 'distinstall' action to run 'Build install' inside the
    generated distribution directory [Jeff Thalhammer]

0.36_16 - Thu Aug 26 12:44:07 EDT 2010

  Bug fixes:

  - Better error message in case package declaration is not found
    when searching for version. [Alexandr Ciornii]

  - Skips 'release_status' tests on perl < 5.8.1 due to buggy
    treatment of dotted-decimal version numbers [David Golden]

0.36_15 - Wed Aug 25 10:41:28 EDT 2010

  Bug fixes:

  - Added a mock Software::License to prevent t/properties/license.t
    from failing.

0.36_14 - Sun Aug 22 22:56:50 EDT 2010

  Enhancements:

  - Adds 'release_status' and 'dist_suffix' properties in preparation
    for adding CPAN Meta Spec 2 support.  'dist_suffix' will be set
    to 'TRIAL' automatically when necessary. [David Golden]

  - Makes 'license' more liberal.  You can now specify either a license
    key from the approved list (c.f. Module::Build::API) or just a
    Software::License subclass name (e.g. 'Perl_5').  This should
    provide better support for custom or proprietary licenses.
    [David Golden]

0.36_13 - Wed Jul 28 22:40:25 EDT 2010

 Bug-fixes:

 - Bundled Module::Build::Version updated to bring into sync with CPAN
   version.pm 0.82 [David Golden]

0.36_12 - Tue Jul 27 00:08:51 EDT 2010

  Enhancements:

  - Module::Build::Compat will now convert dotted-decimal prereqs into
    decimal rather than dying (and will warn about this). [Apocalypse]

  Bug fixes:

  - Caches case-sensitivity checks to boost performance, fixes
    RT#55162 and RT#56513 [Reini Urban]

  - Won't try to use ActivePerl doc generation tools without confirming
    that they are indeed installed. [David Golden]

  - Sets temporary $ENV{HOME} in testing to an absolute path, which fixes
    some issues when tested as part of the Perl core [Nicholas Clark]

  - Module::Build::ModuleInfo now warns instead of dying when a module
    has an invalid version.  ->version now just returns undef
    (RT#59593) [David Golden]

  Changes:

  - When authors do not specify Module::Build in configure_requires and
    Module::Build is automatically added, a warning will be issued
    showing the added prerequisite [David Golden]

  - Moved automatic configure_requires generation into get_metadata()
    and added an 'auto' argument to toggle it (on for META and off
    for MYMETA) [David Golden]

0.36_11 - Thu May 27 09:41:23 EDT 2010

  Bug fixes:

  - Handle META/MYMETA reading and writing within Module::Build to ensure
    utf8 mode on filehandles.  Now passes/gets only strings to YAML::Tiny
    or Module::Build::YAML

0.36_10 - Wed May 19 18:36:06 EDT 2010

  Bug fixes:

  - Fix failing t/manifypods.t on Windows from 0.36_09 changes [Klaus
    Eichner]

0.36_09 - Tue May 11 09:19:12 EDT 2010

  Bug fixes:

  - Improve HTML documentation generation on ActivePerl (RT#53478)
    [Scott Renner and Klaus Eichner]

0.36_08 - Mon Apr 26 08:00:15 EDT 2010

 Enhancements:

 - Give a list of valid licenses when given one we don't recognize
   (RT#55951) [Yanick Champoux]

 - Added 'Build manifest_skip' action to generate a default MANIFEST.SKIP
   [David Golden]

 Changes:

 - When temporarily generating a MANIFEST.SKIP when none exists, it will
   be removed on exit instead of hanging around until 'Build clean'.  This
   is less surprising/confusing and the 'Build manifest_skip' action
   is now available instead to bootstrap the file [David Golden]

 Bug fixes:

 - Fixed runtime error on cygwin when searching for an executable command
   during installdeps testing [David Golden]

by dagolden at March 07, 2011 11:33 UTC

March 06, 2011

Ricardo Signes: getting the band (of demihuman heroes) back together

Last night was the first session of my otherwise long-running tabletop RPG game in over a year. Our old venue had become unavailable, last year, then other complications came up, and finally when things seemed like we could get things moving, there was just a lot of inertia to overcome. Finally, I sent out a grumpy, "Should we just call things off forever, or what?" Fortunately, I got quick responses from everyone: no, the previously mentioned date is good. Unfortunately, the date was in just a few days, and I only had the next session planned in broad strokes.

I had basically planned a dungeon crawl. Dungeon crawls are fun, but I find them difficult to run successfully. Producing a dungeon is, for me, quite a lot of work. I never quite feel like I've made the place plausible enough, and I don't think I end up with rooms of the right size, shape, or layout for particularly interesting encounters. I've begun to think that the right course of action is to start stealing liberally from published (and well-liked) modules. I've picked a few to read through, to see if the encounters and maps can be stolen. One complication is that I generally prefer to run games with my own bestiary, but 4th Edition makes it fairly easy to reskin whatever monsters they chose with something I'm happy with.

In the meantime, I started with the donjon random dungeon generator, futzed with settings for a long time, loaded it into Dungeonographer, and made some corrections. The problem with that approach is that then I've got to start with a blank map and rationalize each room. After making a pass at that, I started to realize that the room sizes were all sort of bizarre, and placing furniture was a mess. I ended up with a furniture-free map, which made combat just a little more bland. Next time, I'm going to start in Dungeonographer and design it all myself -- which means I'll probably start with a list of planned locations and encounters, and then add just enough connecting material to keep things sensible.

Planning encounters is tough, too. Interesting encounters are hard to build. In 4E, almost all combat encounters are long. (I have yet to actually try out Sly Flourish's 30 Minute Skirmishes ideas.) If the encounter isn't interesting and dynamic, the players will get bored, and by the end everyone will more or less be droning, "I use cleave on the one next to me" every round. The DMG advice to build encounters with monsters of different roles is really important advice. I need to get a much better feel for what groups of monsters make for an interesting encounter, and how to roll them out over an encounter. (I want to complain that this would be easier if there was a greater selection of low-level monsters -- but while I wish there was a better selection, I don't think I'd actually be any better for having more to choose from.)

We didn't finish the entire dungeon, unfortunately. We started later than we used to start, and there was a lot of small talk, as usual -- especially because we hadn't been together in one place for so long. Even if we'd started an hour or two earlier, we couldn't have finished things up in time. I need to get a better handle on the speed at which things go. It's really important because I'm trying to keep a good balance between having the overall campaign move at a reasonable pace, while not having each session be a rush through a seemingly-meaningless set of events. This is especially difficult because we play, at best, every three weeks. Having to split one adventure up over two nights delays the whole storyline by three weeks or more, which can be frustrating, at least to me.

I've been thinking about running an online campaign for a long time. I couldn't find any online battle mat that I liked, and this was a serious blocker for me to get started. A few weeks ago, Nick Perez (who had been complaining that I kept teasing him with the prospect of this game) took up the challenge and found Gametable, which has a weird, sort of ugly interface, but does absolutely everything I needed. The chances of this online game happening are greatly improved. I'm also excited by the notion that I'll be able to use Dungeonographer and Hexographer to build maps for the game, especially once a small feature request for Dungeonographer is fulfilled.

Before the game starts, though, I think I'll be able to use Gametable for a dual purpose: I can get used to running games with it, and I can run short one-combat sessions to try out new encounter ideas. All I need to do is plan some encounters, conscript some players, and carve out some time!

by rjbs at March 06, 2011 19:55 UTC

Sawyer X: So there's this TelAviv.pm meeting, right?

I wanted to write up on the February TA.pm meeting we had two weeks ago but kept delaying it. I think it's about time!

As with every TA.pm meeting, we try to mix both beginner and advanced talks, in order to have something for everyone. It's proven very effective so far. We've also started doing lightning talks, which I really wanted to do for a while.

The beginner talk was done by Gabor Szabo, giving an introduction on how to get started contributing to an open source project. There were a lot of laughs and a lot of fun. We also got to see new faces, and that's always great.

Then we had a round of lightning talks, one by Shlomi Fish on how to solve a very specific problem in a ton of different ways, and one by no speaker at all, on how to write a nifty website in under a minute using Dancer.

Shlomi's talk was a hell of a lot of fun. Considering how difficult it is for Shlomi to do a talk, I think he was very brave at attempting a lightning talk and I think it really paid off. People were really enjoying themselves and having a blast. I would attribute it to a good-spirited crowd and an amused - yet humble - speaker.

After the lightning talk, I gave a talk on the differences in Perl variable definitions, mainly explaining each of (and the difference between) "my", "our", "local" and "state".

When the talks ended, we went to a local restaurant to enjoy a dinner and have some more fun.

I want to thank the speakers and everyone who came. I hope to see you all (and those who didn't come last time) in the next meeting!!

Slides:

If you fancy doing a talk, whether beginner or advanced or a lightning talk, hit me up.

Here's to more fun!

by Sawyer X at March 06, 2011 08:36 UTC

March 04, 2011

CPAN Testers: CPAN Testers Summary - February 2011 - A Saucerful of Secrets

Much of February was taken up with monitoring updates and watching for any unfortunate consequences. Thankfully the improvements seem to have done their job. The report submissions in January dropped from previous months, which is normal going by past experience, and sure enough the submissions increased again last month. Despite this, the builder has managed to stay on top of the page requests. Some fine tuning has taken place and currently the builder is at most about 2-3 days behind, and on average only 1-2 days behind. We'd prefer to have updates even more frequent than this, so over the next few months we'll investigate further what improvements can be made.

Recently David Golden had cause to investigate a problem that was surfacing with Module::Build. Some reports to CPAN Testers were highlighting a particular issue that was proving hard to track down. Thankfully, Chad Davis went the extra mile to try and provide David with as much context as possible to understand the problem. It is worthwhile reading David's full post, appropriately titled 'How to replicate a failure', as it provides a good example of how testers and authors can work together to solve problems.

The 2011 QA Hackathon is now firmed up, and although work integrating perl smoke test reports into the Metabase is planned, there is nothing specific to CPAN Testers. This is mainly due to David and myself being unable to attend in person, although we both hope to be online at some point during the event.

To end off this summary, the mappings this month included 8 total addresses mapped, of which 7 were for newly identified testers. It seems testers are getting used to reusing their metabase profile, rather than creating a new one every time they change email address, as this was the intention behind the profile, allowing us to more easily attribute reports to a particular tester.

If you have any CPAN Testers related news, blog posts or if you are planning any CPAN Testers related talks at your local Perl Monger group or at a workshop or conference, please let us know, and we'll promote you here on the blog. Until next time...

Cross-posted from the CPAN Testers Blog.

by CPAN Testers at March 04, 2011 15:40 UTC

March 03, 2011

Justin Mason: Links for 2011-03-03

by dailylinks at March 03, 2011 18:05 UTC

March 02, 2011

Jonathan Swartz: A faster memory cache for CHI

I’ve released CHI 0.41 with a RawMemory driver. It’s like the regular Memory driver except that data structure references are stored directly instead of being serialized / deserialized. This makes the cache faster at getting and setting complex data structures, but unlike most drivers, modifications to the original data structure will affect the data structure stored in the cache, and vice versa. e.g.

   my $cache = CHI->new( driver => 'Memory', global => 1 );
   my $lst = ['foo'];
   $cache->set('key' => $lst);   # serializes $lst before storing
   $cache->get('key');   # returns ['foo']
   $lst->[0] = 'bar';
   $cache->get('key');   # returns ['foo']

   my $cache = CHI->new( driver => 'RawMemory', global => 1 );
   my $lst = ['foo'];
   $cache->set('key' => $lst);   # stores $lst directly
   $cache->get('key');   # returns ['foo']
   $lst->[0] = 'bar';
   $cache->get('key');   # returns ['bar']!

It should work well as a short-lived L1 cache in front of memcached, for example.

I was motivated to create this by Yuval Kogman’s Cache::Ref, which is still a little faster (not having some of the overhead of CHI’s metadata and features). See the CHI benchmarks, also new with this release.

by Jonathan Swartz at March 02, 2011 23:35 UTC

Justin Mason: Links for 2011-03-02

by dailylinks at March 02, 2011 18:05 UTC

brian d foy: Your Fantasy Perl Conference schedule

What talks do you want to hear at conferences? I recently finished going through the proposals for OSCON, and the ones I thought were really good were the ones the Perl committee solicited or helped to develop. That is, instead of waiting to see what speakers suggested and living with that, we went out to get what we wanted. And, I think this is going to be a pretty good OSCON for Perl.

At Frozen Perl, Chris Prather, one of the organizers for YAPC::NA 2011, was telling me about their plan to schedule talks first and find speakers later. I was just checking the YAPC::Rīga website, and they are doing the same thing.

As a frequent speaker, I've lately been asking the conference to suggest topics to me. It's more challenging for me that way and raises my game a bit. Maybe that will work for other speakers as well.

So, imagine your fantasy conference (unrestrained Perl conference schedule, not conference about fantasies). You're the organizer. What would you put in the schedule and who would you assign the talks to?

I have a long list, but I'll give you a couple of them:

  • Surviving Perl and Unicode (Tom Christiansen)
  • Fast data with PDL (???)
  • Perl and the new HTML5 (???)
  • Git Boot Camp (Scott Chacon)

Suggest your own three topics, maybe with speakers attached, and perhaps the organizers can twist some arms. Chris did mention that he initially had to twist arms but his targets eventually relented.

by brian d foy at March 02, 2011 14:42 UTC

brian d foy: A Module::CoreList for vendor distributions

How much do perl distributions diverge from or augment the Standard Library? Lately I've been doing a lot of work with distributions that augment their standard Perl installations, so although I'm restricted to the distribution's Perl and its modules, most of the good stuff is already there. However, we don't have a tool like Module::CoreList that knows about vendor distributions.

Although I don't have the time to write it myself, I'd really like to have a tool that can report module presence and version for either the current operating system or any that I name:

$ corelistng --debian -a Scalar::Util

$ corelistng --macosx -v 5.10.0

This would be pretty handy when I have to put together a private MiniCPAN.
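
For core perl itself, Module::CoreList already answers these questions, and a vendor-aware corelistng could presumably offer a similar interface on top of per-vendor data. As a reminder of what the existing module provides:

use Module::CoreList;

# Which perl release first shipped Scalar::Util in the core?
print Module::CoreList->first_release('Scalar::Util'), "\n";

# The distribution also ships the corelist(1) tool for the same queries,
# e.g. "corelist -v 5.010 Scalar::Util" from the shell.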

by brian d foy at March 02, 2011 13:04 UTC

brian d foy: My Frozen Perl 2011 Keynote

I've uploaded the slides for my Frozen Perl 2011 keynote, in which I answer one part of the question "What are five things I hate about Perl?"

You may remember that I first asked that question in the introduction to Mastering Perl, so I've been thinking about this since 2005. I posted it on use.Perl in 2007 and Stackoverflow in 2008 (and Jeff Atwood picked up on it for Stackoverflow Podcast #73, around minute 47), although I'm not sure of that. I might have picked it up from 5 things I hate about Ruby, which is from about the same time that I would have been writing that for Mastering Perl.

Almost everyone fails this question though (and Jeff's answers are very weak). Most people don't think about it long enough, so they answer with very superficial, stylistic things that don't prevent them from doing anything but are just their pet peeves.

In my keynote, I note that this interview question evaluates three things at once: real experience, depth & reach, and workarounds. How much have you actually thought about it, how much does that item actually affect the language and what you can do with it, and how do you work around it?

My answer is how use works. It takes a namespace and translates it into a filename, then traverses @INC looking for that filename, using the first one it finds. This has far reaching consequences:

  • You can only load a module that lives in a file of the corresponding name.
  • The direct correlation to a particular filename makes it virtually intractable to store multiple versions of a module at the same time (a goal for Perl 6).
  • The filename is sometimes related to the distribution name, but sometimes not. For instance, how many people know which distribution HTTP::Request is in? How would you find out just by looking at the file?
  • Since you have to have a particular filename, your distribution structure is effectively constrained as well.
  • Since they are just files, once installed you can't really tell which ones come from the same distribution.
  • We have to have PAUSE permissions to control who gets to create a file through a distribution installed by a CPAN client.
  • Since a distribution name is not necessarily related to the filename of the module, CPAN clients need a way to translate that. PAUSE jumps through a lot of hoops to index distributions, and CPAN clients have to get at that data (although cpanm does an end run around that in a way I think is ultimately unsustainable).

So, a small design decision impacts quite a bit.
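
To make that mechanism concrete, here is roughly what the translation and search look like (a simplified sketch; the real work happens inside perl's require and also handles %INC caching, @INC hooks, and error reporting):

my $module = 'Foo::Bar';              # as in: use Foo::Bar;
(my $file = $module) =~ s{::}{/}g;    # Foo::Bar  ->  Foo/Bar
$file .= '.pm';                       #           ->  Foo/Bar.pm

foreach my $dir (@INC) {
    if (-e "$dir/$file") {            # first directory that has it wins
        require "$dir/$file";
        last;
    }
}
$module->import;                      # then the import is run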

by brian d foy at March 02, 2011 12:56 UTC

March 01, 2011

Jonathan Leto: Parrot Embed Grant Update #3: Now with Dragons

The quest to improve test coverage for src/extend_vtable.c has continued. Some dragons were slayed, a few trolls were paid tolls to cross creaky bridges of abstraction and many siren calls to hack on other code were dutifully ignored (mostly).

This TPF grant has forced me to become very familiar with Parrot vtables (virtual tables), which is basically an API for talking to Parrot PMCs (really just objects with a funny name). PMC can stand for Parrot Magic Cookie or PolyMorphic Container. Take your pick.

Firstly, vtable is already slang for "vtable function", which expands to "virtual table function." What the junk is a "virtual table function" ? I find that the simplest way to think about it is that every PMC has slots or buckets with standardized names such as get_bool (get Boolean value) or elements (how many elements does this PMC have?)

All PMCs inherit sensible defaults for most vtables, but they are allowed to override them. Why would you want to override them? As a simple example, let us assume there is a vtable called length (there isn't actually, but it makes an easy example to explain these concepts). Our length vtable will act just like elements and tell us how many elements a PMC has. If we had a complex number PMC that was really just a FixedFloatArray PMC of two numbers underneath, the length would always return 2 for every complex number. Not very useful.

A much more useful length vtable would use the coefficients a and b from a + b*i and compute the Euclidean distance (length from the origin) sqrt(a^2 + b^2). Hopefully you now have a taste for what vtables are about. Parrot PMCs have over 100 vtables that can be overridden to provide custom functionality.
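
The override idea maps directly onto ordinary method overriding. Here is a rough Perl analogy (the class names are invented for illustration; Parrot's real vtable overrides live in C-level PMC definitions, not in Perl):

package Number::Simple;
sub new      { my ($class, @parts) = @_; bless [ @parts ], $class }
sub elements { scalar @{ $_[0] } }   # default: how many slots are stored
sub length   { $_[0]->elements }     # default "length": same as elements

package Number::Complex;
our @ISA = ('Number::Simple');
# Override: a complex number a + b*i reports its magnitude,
# not the number of slots it happens to be stored in.
sub length {
    my ($re, $im) = @{ $_[0] };
    return sqrt($re**2 + $im**2);
}

package main;
print Number::Complex->new(3, 4)->length, "\n";   # prints 5, not 2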

I recently ran across the hashvalue vtable and couldn't find any tests for it in Parrot core (outside of the test that I had written for it in extend_vtable.t) or any use of it in Rakudo Perl 6. Oh noes! It seemed like an unused/untested feature, so I created a Trac Ticket to mark it as deprecated so it could be removed in a future release.

The discussion about the ticket was fierce. NotFound++ explained why the vtable was important and the mighty coding robot known as bacek++ manifested tests quickly.

Yet another case of this grant work having a positive impact on the Parrot codebase, even outside the embed/extend interface. I also improved an error message in the PMCProxy PMC, which arises when something goes bad during a partial re-compile. Yay for improved debuggability!

According to the current code coverage statistics, extend_vtable.c is up to 54% coverage from 43%, which is not quite where I predicted from my last update. No doubt this has something to do with me packing and preparing to move to a new house this month. My velocity didn't decrease so much as the amount of time that I had to work on this grant.

I am greatly enjoying working on this grant and even if it is going a bit slower than I had planned, I am very confident that it will be completed in the next few months and hopefully sooner.

by Jonathan Leto at March 01, 2011 21:37 UTC

Justin Mason: Links for 2011-03-01

David Golden: Slouching towards Module::Build 0.38

I've just released the latest development release of Module::Build, version 0.37_06. You can install it from your favorite client as DAGOLDEN/Module-Build-0.37_06.tar.gz
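
For example, straight from the command line with the cpan client:

$ cpan DAGOLDEN/Module-Build-0.37_06.tar.gz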

I am not aware of any more issues blocking the release of 0.38, so unless something new and serious comes up, I plan to release the next stable Module::Build around the end of the week.

If you use Module::Build and have not tested your distributions with any of the 0.37_XX releases, please do so now as this may be your last chance.

by dagolden at March 01, 2011 03:07 UTC

February 28, 2011

Sebastian Riedel: A logo for Perl 5

Last week Mark Keating from the Enlightened Perl Organization asked for help creating marketing material for Perl. So in response to that I've decided to spend the weekend doing something I've wanted to do for quite some time now: design a logo for Perl 5.

You might remember this post from last year when I tried the same for Perl 6; it sadly did not turn out so well due to lack of interest from the decision makers and ultimately ended up driving me away from that community. This time, however, the overall experience was much better and I was positively surprised by how pleasant working together with the EPO folks has been. I'm sure there will be more cooperation in the future.

At 23 years old, Perl is already one of the older programming languages; some have even called it a dinosaur and declared it dead. But there has been no great dying in the world of programming languages, and the Perl community is actually healthier than ever.

So let's assume Perl was a dinosaur that survived to dominate the food chain. It surely has to be a carnivore judging by its razor-sharp regular expressions, and the 20,000 distributions on CPAN show us that it hunts in packs. After a careful review of Jurassic Park I've come to the conclusion that Perl has to be a Velociraptor, and that's what I'm basing the logo on!

Perl5logo

Raptors are not just dinosaurs, they are pretty damn badass dinosaurs and have already been used on marketing material for the London Perl Workshop in the past. Here's also a little Mojolicious poster demonstrating its proper use in marketing material.

Mojoliciousposter

The logo has already been released to GitHub under the CC-SA license, which means it is open: as long as you attribute it properly, you are free to do pretty much whatever you want with it. More formats and sizes will be added over time.

Shirts

I've also prepared a few t-shirts; you should take a look (Americans go here). :)

Disclaimer: Some parts of this story might be fiction.

February 28, 2011 14:22 UTC

Curtis Poe: More stupid testing tricks

For the guy who wrote the test harness that currently ships with Perl and has commit rights to an awful lot of the Perl testing toolchain, I sure do seem to do a lot of stupid things while testing. That being said, sometimes I need to do those stupid testing tricks. That's because there seem to be roughly two types of developers:

  • Those who work in a perfect world
  • Those who work in the real world

I say the latter with a bit of bitterness because invariably I keep hearing YOU MUST DO X AND NOTHING ELSE where "X" is a practice that I often agree with, but it's the "and nothing else" bit that really frosts my Pop Tart (tm).

I'm in the rather unfortunate position of having an NDA so I can't exactly explain what's driving a particular use case, but I have a fantastic job which nonetheless has some serious constraints which I'm not in a position to deviate from. So not only am I not in a position to follow best practices in what I'm about to describe, I'm not even in a position to tell you why. Suffice it to say that I have an enormous system which I'm faced with and many things which I would take for granted in other environments are not the case here, so I'm forced to improvise. (Note that I didn't say it's a bad system. It's a different system and there is at least one fundamental assumption about software development which doesn't apply here, but I can't say more)

So let's say that you have a rather large dataset you're testing and you have some constraints you must face:

  1. You have no control over the actual data
  2. You cannot mock up an interface to that data
  3. The data is volatile

How do you test that? Let's say a function returns an array of array refs. At first, I tried writing something like the Levenshtein edit distance for data structures, but our data is so volatile that instead of having the tests fail the day after they're written (the data I test against is more-or-less stable for one day), I could have them last several days before failure hits.

Still, coming back a week later and still having the tests fail is not good. Further, by the time the data bubbles up to me, the criteria by which it's assembled and sorted is not present, so I have no way of duplicating that in my test (and it's complex enough that I wouldn't want to duplicate it).

Thus, I'm stuck with the awful problem of tests which are going to break quickly. I thought about the excellent Test::Deep, but that only lets me validate the structure of the data, not the meaning. Test::AskAnExpert could let me know the meaning by punting to the human (me, in this case), but this doesn't do anything about the data being so volatile.

So I've written the abysmally stupid Test::SynchHaveWant. The idea is that the results you want are in the __DATA__ section of your .t file and if the test(s) fail, you can look at the failures and if they're not really failures, you can then "synch" your "wanted" results to the new results and watch them pass again. We do this by writing the synched results to the __DATA__ section.

For example: let's say that commit X on Feb 3rd is a known good commit, but your tests are now failing on Feb 27th. Roll your code back to X and rerun the tests. If they fail in the same way, you can assume that it's merely data changes. Simply "synch" your test data, rerun the test to verify, then checkout "head" again and make sure the tests pass.
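
To sketch the general shape of the idea, here is a generic illustration with an invented fetch_report() and a plain JSON snapshot; this is not Test::SynchHaveWant's actual interface:

use strict;
use warnings;
use Test::More tests => 1;
use JSON::PP qw(decode_json encode_json);

# Stand-in for whatever returns the volatile array-of-arrayrefs.
sub fetch_report { return [ [ 1, 'foo' ], [ 2, 'bar' ] ] }

my $want = decode_json( do { local $/; <DATA> } );
my $have = fetch_report();

is_deeply( $have, $want, 'data matches the synched snapshot' );

# A "synch" pass would rewrite everything after __DATA__ with
# encode_json($have) once a human has approved the new output.

__DATA__
[[1,"foo"],[2,"bar"]]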

This is an incredibly bad idea for several reasons:

  • Simply asserting that the results you want are the results you got is begging for laziness and false positives.
  • Rewriting your source code on disk is very stupid.
  • The data you want is now in the __DATA__ section, pulling it away from the code which should have it, masking the intent.
  • It's still a lot of manual work when there are failures.

All things considered, this is probably one of the dumbest testing ideas I've had, but it's working. I've a few more ideas to make it easier to use, but I'm still trying to figure out a cleaner way of making this work.

by Ovid at February 28, 2011 09:31 UTC

February 27, 2011

David Golden: Fixing screen brightness on Thinkpad X201 and Ubuntu 10.10

I just tried installing Ubuntu 10.10 on my Thinkpad X201. For the most part, everything worked right away and on an SSD drive, it felt blazingly fast.

However, the screen brightness controls did not work. Some online research suggested adding acpi_osi=Linux to the kernel boot parameters and I also found a tip in the thinkpad-acpi driver documentation for how to force-enable it (over the native Linux ACPI driver).

First, edit your /etc/default/grub file to add acpi_osi=Linux and acpi_backlight=vendor arguments to GRUB_CMDLINE_LINUX_DEFAULT. It should look something like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor"

Then run sudo update-grub to enable the new arguments. After a reboot, my brightness controls were working.

The other thing I noted was a strange error on boot: "failed to get i915 symbols". I found a solution for that, too:

sudo -s
echo "i915" >> /etc/initramfs-tools/modules
update-initramfs -k all -u
reboot

For more general information about Linux on the Thinkpad X201, see ThinkWiki.

by dagolden at February 27, 2011 21:05 UTC

February 26, 2011

Leo Lapworth: http://yapc.eu/ - thanks to all involved

I'd like to publicly thank Bram, the previous owner of http://yapc.eu/.

I asked if we could transfer it over to the Perl NOC team so that it becomes an official community resource (he was already redirecting it to the yapceurope.org domain).

He was more than happy to do so, and has been so helpful in the process.

At the same time I'd like to thank the Perl NOC guys for taking this on. You probably don't realise just how much infrastructure these two guys run on our behalf, and how much more they are taking on!

I'd also like to thank the ACT team, who run most of the Perl conference websites and have set up yapc.eu on their server.

http://yapc.eu/ - points to the http://www.yapceurope.org/ site.

http://yapc.eu/year will redirect you to the relevant historical sites.

I hope that for 2012 and future YAPC EUs the site can be hosted directly off this domain, and that having this consistency will help everyone remember (and advertise) a single address (improving Google results and awareness in general).

Thanks for being such a great community to be a part of.

by Ranguard at February 26, 2011 13:25 UTC

February 23, 2011

David Golden: How to replicate a failure

Whenever I get a bug report or a CPAN Testers FAIL report and the issue is not obvious right away, the first thing I try to do is replicate the failure. Without it, I'm left to diagnose with hunches and release new code that I hope fixes that problem. That's not software engineering, it's software faith healing.

Today, I had a wonderful experience that I want to hold up as an example of what makes my life much easier. For a while now, I've been getting very bizarre FAIL reports for the latest Module::Build development releases from one particular CPAN tester, Chad Davis. The symptom is that an outdated module is picked up in Chad's local::lib, even though CPAN.pm appears to have detected (and satisfied) the missing dependency. What is particularly weird is that Chad later reported that an explicit "test ..." of the dependency followed by "test ..." of the Module::Build development release then passes all tests.

My first attempts at replication failed, so after a few emails back and forth, Chad sent me detailed instructions for replication. Here is an excerpt from his email (used with his permission):

I setup a new user, and deleted his .bashrc, leaving only the stock
Ubuntu 10.10 /etc/bash.bashrc in the environment. Then I created this
three-line ~/.bashrc :

perl5=/tmp/tmplib
lib="$perl5/lib/perl5"
eval $(perl -I"$lib" -Mlocal::lib="$perl5")

So, the environment now resolves to:

PERL5LIB=/tmp/tmplib/lib/perl5/x86_64-linux-gnu-thread-multi:/tmp/tmplib/lib/perl5
PERL_LOCAL_LIB_ROOT=/tmp/tmplib
PERL_MB_OPT='--install_base /tmp/tmplib'
PERL_MM_OPT=INSTALL_BASE=/tmp/tmplib

Notice that Chad describes the setup in detail, and even went to the trouble of replicating it with a "clean" user. This made it very easy to set up exactly the same situation on my development machine.

Next, Chad walked me through how to replicate the issue and even confirmed it on a fresh virtual machine!

Then I installed Parse::CPAN::Meta 1.42 (with CPAN 1.9402 and
local::lib 1.008), then I tested MB 0.37_04 which gave the same
errors. Then I quit the cpan shell, restart it, first do an explit
test of PCM 1.4401 before running a test of MB and all tests pass, as
before.

I also verified the same behavior on a virtual machine with a
fresh ubuntu 10.10 (with updates) and the same three-line .bashrc and
got the same behavior. I'm surprised that you cannot reproduce this.
At this point I don't believe there is anything left that is specific
to my environment.

With those clear instructions, I was able to replicate the issue.

Chad then took me through variations and outcomes:

I then upgraded CPAN to 1.94_65 but have the same errors on MB 0.37_04
unless I explicitly test PCM 1.4401 first.

And I have the same problems with MB version 0.37_05 as well, which
also works after an explicit test of PCM 1.4401

I tried to start working backwards a bit, MB 0.3624 is fine,
presumably because it doesn't depend on PCM.
However, MB 0.3701 fails, despite the fact that it only depends on PCM
1.42, which is the one that's installed, according to pmvers.  It
looks to be the same issue in each case: the existing PCM is not always detected.

Now I have a fact base that I can test and confirm. Finally, Chad did some further digging into the problem:

Then I looked at t/mymeta.t and traced back to CPAN::Meta and looked
at it dependencies in Makefile.PL to find that PCM 1.44 is listed both
as prereq and as a build prereq, which, being a novice, looked a bit
funny to me. Taking out the build prereq and leaving the prereq then
allowed me to test MB without errors.

And with that, I've got a decent workaround solution, as well as a hint as to what's going wrong in CPAN.pm. I haven't fixed CPAN.pm yet, but with this start, it's going to be much, much easier.

I want to thank Chad for his responsiveness and the incredible detail of his report. I hope it can be a lesson to anyone reporting bugs or responding to questions about failure:

  1. Describe your environment in detail. If you can, replicate the error with a "clean" user or even on a clean install of the OS.
  2. Describe the exact steps you took to replicate the failure. Take notes as you do it, so you can be as specific as possible.
  3. Describe any variations you tried that did work, or any workarounds you used to deal with the problem.

That seems like common sense, but few bug reports or failure investigations I get are as thorough and constructive as the example above. Thank you, Chad!

by dagolden at February 23, 2011 05:27 UTC

Ricardo Signes: wherein I continue to fail at being a dungeon master

In 2005 or so, I started running a science fiction role-playing game, and it ran for a little under five years. I had a lot of fun, and I think the game had some merit, but I got frustrated with a lot of its failures and wrote a post mortem in which I put most of the blame on myself for a lot of problems that I brought down on my own. Below, I reproduce most of my report to the players:

(cut here)

Two thousand nine, year of the ox! Then begins our D&D game. I am excited about it. It is much better planned than Deliverance was, not only in general, but also because I have attempted to prepare a game that will address the specific problems with Deliverance.

Let's try and make sure I know what all those problems are, though, shall we?

Read the whole thing before starting to reply, if you're going to reply. "Campaign changes" is my phrase below for, "addressed by changes in the narrative structure of the game."

PACING: too much downtime in too many places

  • too much "real" time in the game when you are powerless
  • too much time spent in port going, "anything else you want to do here?"
  • too much pre-game smalltalk (especially given the during-game smalltalk)
  • large quantities of downtime eliminate any sense of the clock or calendar moving in game; how long since Game 1, in game? who knows? nobody? ugh.
  • dead time also contributes to a lack of a sense of distance

solutions: firmer hand with declarations of bullshit; campaign changes; players must be more proactive in declaring what random stuff they want to do with no "anything else?"

PACING: character advancement nearly non-existent

  • nearly nobody ever spends XP
  • no character has much of a personal plot beyond backstory
  • the party itself has not advanced much in its position or renown

Partly, this is due to pacing issues. It takes so long to get through one story that advancement would be doomed to be slow. It's also the case that when the story slows down, the first thing to get dropped are per-character plotlines.

There has also been a failure to acquire what I'll call, here, "henchmen." Opportunities have arisen and been mostly passed by, and other opportunities, pursued, failed because of changes made to cope with pacing issues.

solutions: D&D has better built-in advancement mechanics; campaign changes; revaluation of XP (see below)

GAME PER SE: XP has no value

  • XP is given little importance
  • no reason is given as to why (or that) XP has been rewarded

I really wanted to run a game in which XP rewards were special and valued, but I did not commit to that up front, so not only do people not know how much XP they have, or want any to spend, people don't even realize when or why they receive XP. This is a major problem, because XP both drives character advancement and serves as a carrot to reward good play. This is a massive problem.

solution: I will return to my original plan and make XP rewards more public, important, and meaningful. Also, D&D nearly makes this mandatory.

SETTING: nobody can remember a damn thing

  • "You have an incoming call from Major Plot Actor." // "Who?"
  • "Wait, what's the place where we performed Massive Task?"

This has two root causes, I think: too much information, and too lazy players. I think that I have an easier time keeping track of everything in my head than you guys, partly because I have written it down. This leads to a lot of "wait, who is X again?" and then I get annoyed because X is so important and has shown up so often.

Sometimes I think the problem is that I have provided so many names and places, and sometimes I think it's because nobody is making an effort to keep track of the story apart from me. Maybe that's unfair, but it really seems that way to me. I've dropped a lot of minor characters and plotlines and other things both because of (again) pacing issues and to reduce the number of things to know, but this campaign just has a lot of balls in the air. When Odes Tem or Angu Treech become "who was that guy?" I feel like either I have utterly failed to tell a memorable story or like nobody was paying attention.

solutions:

  • The D&D game will be radically simplified, and I will only provide as many details as are vitally important -- unless you look for more.
  • The players will keep notes. I don't care how this works. Maybe one person will take them down and type them up. Maybe there will be a notebook that rotates between players for note taking. This isn't negotiable, though. I'm happy to answer questions about obscure points, but when the question is "Who was that Evil Emperor character again?" the players should be able to have this answer on hand. Failing this, I would rather just run a campaign from published modules.

SETTING: the desperate need for a map

This is stupid, but significant. I have a map. I use it all the time. I just never, ever, ever remember to bring it to the table. I need to do that. I need to build a map up front of vague details available to all player characters, provide it to each player, leave a few extra copies at the gaming venue, and then let the player notetaker add information as we go along.

Travelling fewer vast distances to exotic places with many cities will help too.

GAMEPLAY: too few challenges (not enough combat or die rolling)

This contributes to the devaluation of XP, character advancement, and skills. In the average game, for various reasons (including dead time) there are not enough die rolls. There is not enough chance to fail, meaning there is not enough chance to triumph. There are not enough conflicts in which the success or failure is clear: yes, you killed the guy; no, you didn't bypass the security system.

solutions: more combat, more clear-cut challenges, tie re-valued XP to these challenges; campaign changes

SETTING: players do not know what their characters would know

Unlike many fantasy settings, the nature of things in the current game's setting is not a known quantity. Difficult-to-answer questions include "what lies within the realm of common technology?" and "what is the average person's day like?" and "what do people believe is the truth about some well-known entity?". This leads to the Defragulation Problem, where something that should be obvious to the PCs is not obvious to the players.

I have tried to address this with the current game as we go along, but it is a very difficult problem, especially in a sci-fi setting. In a fantasy setting, the basics make this easier: obvious physical possibilities are possible (yes, you can throw a rock) and magic makes absolutely anything else possible. The parameters of magic's abilities are known only to magic users, if that. With no magic users in the party, it is totally reasonable that nobody knew that Meepo the Magician could shoot dragons from his nose. Even with magic users, things are pretty clear: anything is possible, and harder things require someone more powerful than simpler things.

I am going to make sure that setting-specific information (deities, commerce, etc.) is explained to each player sufficiently, but for the most part:

solutions: fantasy setting instead of sci-fi setting solves this

(cut here)

The D&D game that followed was on hiatus for 2010, for various reasons, but we're going to start back up again, so I've been working on my notes and planning again, and I've begun to realize that in almost every way, I have failed to follow through on my plans to right my mistakes.

After about ten sessions of the new game, I can't name a single recurring NPC of note. There has been one reused location; to call it a reused setting is stretching the truth. In almost all ways, the game's improvements have been due to the switch to fantasy rather than science fiction. Meanwhile, I think I have made worse mistakes with character and setting than I made previously. Part of this has been related to pacing. I expected this half of the heroic tier to take about half as long as it did, and I did nothing to compensate for the fact that it really stretched out.

The rest, though, is just foolishness on my part. I took my eyes off of my design priorities, and I ended up letting the party do a lot of noodling around that didn't really establish any plot or setting advancement that the players could understand. The really silly thing is that when I was running the sci-fi game, for much of the time I was running two groups. One was a group of itinerant mercenaries, and the other was a special operations group working in one city. The special ops group was a clear demonstration of the power of familiarity. They established safehouses, got to know neighborhoods, and got more and more of a sense of place. Meanwhile, the mercenaries never spent much time in any one place, and the game felt like a bunch of disconnected and unreal locations.

As I work on planning the rest of my D&D game's heroic tier (using Scrivener, which is a fantastic tool for the job), I've been making one index card per likely session, and on each one I've tried to make reference to a location that can be reused, NPCs who can remain relevant, and other hooks that will help make the game world seem like an organic whole. It's not good enough that I, the DM, know how things fit together. The players need to get a sense for it, too.

More than that, they need to get the sense, more and more, that they aren't just moving in the game world, but that they are directing it. At first, of course, they're going to be smacked around by fate and kobolds, but as they climb in level, you can't just replace kobolds with demons and have the heroes spend all their time getting kicked around. They have to be heroic, and that means that people need to stand in awe of them. For that to happen, there have to be people, and they can't just be shopkeepers who seem impressed. I'm trying to make sure that the PCs will have ample opportunity to feel awesome. Smackdowns need only occur once in a while, and will be much more fun when they do.

I think I'm going to end up having to do quite a lot of revision to my current plans to make things work, but I think it is going to be extremely rewarding -- if the darn game ever starts up again.

by rjbs at February 23, 2011 03:40 UTC

Leo Lapworth: Perl memory management...

Does Perl run out of memory?

Today I got an email from someone saying “I was told by a person who used Perl for computational genomics applications that it was running out of memory, so he switched to C++. What’s your thoughts on running out of memory in Perl?”

Just for posterity, here is my reply (please note I'm no expert on this sort of thing and have never had the problem)…


Perl has a great garbage collector. But of course if you read a 1 GB file into memory, then you are using 1 GB, whatever language you use.

So the trick is to read the file in line by line and process the information that is required. This isn't always possible, but there could be a few other issues which your colleague didn't understand. For example, you should not pass large data structures around; you should pass references to them, otherwise they get copied.

my @a_big_list = qw(lots of stuff);
bad(@a_big_list);     # the whole list is flattened into @_
good(\@a_big_list);   # only a single reference is passed

sub bad {
  my @copy_of_list = @_;   # copies every element into a second array
  foreach my $thing (@copy_of_list) { ... }
}

sub good {
  my $list_reference = shift;   # just one scalar; the data is not copied
  foreach my $thing (@$list_reference) { ... }
}
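
And for the "read line by line" point above, a minimal sketch (the file name is made up):

open my $fh, '<', 'huge_genome_data.txt' or die "Cannot open file: $!";
while (my $line = <$fh>) {
  chomp $line;
  # ... process one line at a time; only the current line is held in memory ...
}
close $fh;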

The other issue is that if you use global variables all the time, instead of locally scoped variables (which are garbage collected when they go out of scope), you will have lots of extra memory in use.

Check out http://www.onyxneon.com/books/modern_perl/ (free) — see “Array References” section for more info.

Programs will run out of memory if the coder doesn’t fully understand what they are doing, no matter what language.

You may also be interested in checking out http://www.bioperl.org/ and asking questions on their IRC channel. Perl is used for mass data processing by these guys so they might have further insights.

by Ranguard at February 23, 2011 00:32 UTC

February 21, 2011

Jonathan Swartz: Announcing Mason 2

I’m pleased to announce Mason 2, the first major version of Mason in ten years.

For those not familiar with it, Mason is a templating framework for generating web pages and other dynamic content. Mason 2 has been rearchitected and reimplemented from the ground up, to take advantage of modern Perl techniques (Moose, Plack/PSGI) and to correct long-standing feature and syntax inadequacies. Its new foundations should allow its performance and flexibility to far exceed Mason 1.

Though little original code or documentation remains, Mason’s core philosophy is intact; it should still “feel like Mason” to existing users.

I’ve talked about plans for Mason 2 here before, but as things have changed in the past year and a half, here’s an updated summary:

  • Name. The name is now Mason, instead of HTML::Mason.

  • Component classes. Each component is represented by its own (Moose) class, rather than just an instance of a common class. This means that components have their own namespaces, subroutines, methods, and attributes, and can truly inherit from one another. See Mason::Manual::Components.

  • Filters. A single powerful filter syntax and mechanism consolidates three separate filter mechanisms from Mason 1 (filter blocks, components with content, and escape flags). See Mason::Manual::Filters.

  • Plugins. Moose roles are utilized to create a flexible plugin system that can modify nearly every aspect of Mason’s operation. Previously core features such as caching can now be implemented in plugins. See Mason::Manual::Plugins.

  • Web integration. Mason 1’s bulky custom web handling code (ApacheHandler, CGIHandler) has been replaced with a simple PSGI handler and with plugins for web frameworks like Catalyst and Dancer. The core Mason distribution is now completely web-agnostic. See Mason::Plugin::PSGIHandler.

  • File naming. Mason now facilitates and enforces (in a customizable way) standard file extensions for components: .m (top-level components), .mi (internal components), and .pm (pure-perl components).

See Mason::Manual::UpgradingFromMason1 for a more detailed list of changes.

Mason 2 is obviously still in alpha status, but it has a fair sized test suite and I’m eager to start building web projects with it. I hope you’ll give it a try too! Post feedback here or on the Mason user’s list.

by Jonathan Swartz at February 21, 2011 17:19 UTC

Perl NOC Log: Now hosting the master mirror for CPAN

15 years ago Jarkko Hietaniemi started CPAN, now arguably the most important feature of Perl. For all that time the canonical CPAN has been hosted at FUnet, with (now more than 600) mirrors around the world. Having made much of my living using Perl, I'm incredibly grateful to Jarkko and FUnet for having built and maintained this incredible resource for such a long time.

A few months ago Jarkko started passing the baton for looking after CPAN on to others in the community and as part of that we at perl.org are taking over the task of being the "master mirror" for CPAN.  Currently almost 500 of the CPAN mirrors are mirroring straight from FUnet which is an incredible resource drain for the master mirror.

The new system will be using the rrr tool for rapid mirroring to a set of "tier 1" mirrors so we can more easily scale to support anyone who wants to mirror CPAN. File::Rsync::Mirror::Recent is already in use by some of the mirrors to get "instant" updates from PAUSE. Andreas König (the inventor and long-time maintainer of PAUSE) is working on some improvements to make it work better for cpan.org in general.

We're also working on getting things in place so the static pages on cpan.org more easily can be maintained and updated by the community.

If you are interested in helping with testing the mirroring process or anything else, please subscribe to the cpan-workers mailing list.

- Ask

by Ask Bjørn Hansen at February 21, 2011 01:02 UTC

February 19, 2011

Curtis Jewell: It's the fact that the smoke machine had an old...

(but not crotchety) CPU that caused the problems with Math::BigInt::GMP.

The fix is to change out the libgmp library, and this will be tested later today.

See http://hg.curtisjewell.name/Perl-Dist-Strawberry/rev/68bfeebf36f0 or https://fisheye2.atlassian.com/changelog/cpan?cs=13768 for more information.

In other news, we've fixed the 'missing-README' bug - the README file (and a few links, as well) were accidental casualties of the 'pluggable-Perl-version' move.

So hopefully I'll be able to build a Beta 1 this weekend.

February 19, 2011 10:27 UTC

February 18, 2011

Jeffrey Kegler: Perl and Parsing 7: Do List Operators have Left/Right Precedence?

Chiral Operators

In actual usage, the syntax of Perl's list operators is quite natural. Descriptions of that syntax, however, tend to be awkward.

The current practice is to describe this syntax in terms of "left precedence" and "right precedence". In other words, list operators are said to be chiral. I have problems with the Chiral Interpretation of list operators. The most serious of these: the Chiral Interpretation does not actually account for the behavior of expressions that contain list operators.

In this post, I assume you have a working knowledge of one or more list operators (examples are join and sort). The most authoritative account of the Chiral Interpretation is in the perlop man page.

Our Example

The rest of this post will use a single example:

sub f { say $_[0]; return $_[0]; }
say join ';', $a = f(1), $b = join ',', $c = f(2),
    $d = join '-', $e = f(3), $f = f(4);

Here's the output:


1
2
3
4
1;2,3-4

What is Precedence?

Precedence is a concept familiar from ordinary arithmetic. In school we learned that, in the expression

   1+2*3+4


the 2*3 should be multiplied out first to yield 6, before either of the two additions is performed. Multiplication has higher precedence than addition.

Precedence is a hierarchy. There is an order, from high to low, and each operator has a distinct place.

Some cases are tricky. The same symbol is often both a unary operator and a binary operator. It's very common for the ASCII hyphen-minus sign ("-") to act as both a unary negation operator, and as a binary subtraction operator. The precedence of the unary operator can be different from the precedence of the binary operator, and often is. But while the unary and binary operators may share the same symbol, they are considered to be distinct operators.

If we accept that list operators have a left and a right precedence, as the perlop man page does, that would be an outright exception to the hierarchical ordering of operators by precedence. This points to a potential problem in defining left and right precedence. But that is not the most serious issue with the Chirality Interpretation. So that I can go straight to my main point, let's assume that there are no issues in defining left and right precedence. For now, let's just say that "I can't tell you what the difference between left and right precedence is, exactly, but I know it when I see it".

Let's ask instead about the precedence of operators other than the list operators in expressions which contain list operators.

Comma Operators versus Assignments

Look at the assignment and comma operators in the example above. Ask this question: Does the comma have a higher or lower precedence than the assignment operator?

According to the perlop man page, assignment has a higher precedence than the comma operator. But in the example above, this is not always true. Here are values of the variables after the example is executed:


$a=1
$b=2,3-4
$c=2
$d=3-4
$e=3
$f=4

For the assignments to $a, $c, $e, and $f, things are as perlop says -- those assignment operators have higher precedence than all the commas.

But for the assignment operators in the assignments to $b and $d, things do not behave as advertised. True, those assignments still have higher precedence than the commas to their left. But the assignment of $b has lower precedence than the commas to its right. The same is true of the assignment to $d.

Chirality is Contagious?

What seems to be happening is that not only are list operators showing chirality, but that chirality is spreading to other operators. The perlop man page does not really prepare us for this.

The Grouping Operator Interpretation

Now let's add parentheses, so that they clarify the syntactic groupings without changing them:


say join ';', $a = f(1), $b = (join ',', $c = f(2),
    $d = (join '-', $e = f(3), $f = f(4))); 

With this the conceptual problems disappear. Why? Because parentheses are recognized as a grouping operator. That is, we know that, regardless of the precedence hierarchy among operators, operations inside parentheses will take precedence over operations outside the parentheses. Parentheses also have two different precedences, but they are not chiral -- parentheses have an internal and an external precedence.

The parentheses suggest a better way to describe Perl's list operators. We can think of the list operators as a special kind of grouping operator.

  • Just as a grouping begins before a left parenthesis, a grouping starts just before the list operator.
  • Just as with parentheses, operations inside a grouping take precedence over those outside.
  • Unlike parentheses, the grouping begun by a list operator is not closed explicitly. The grouping started by a list operator ends just before the next operator which has a precedence lower than the internal precedence of the list operator.
  • If, in an expression, no operator after the list operator has lower precedence, then the grouping ends at the end of the expression.
  • The internal precedence of list operators is between the precedence of the Perl comma operator and the precedence of Perl's logical not operator. This is higher than the internal precedence of parentheses. In the current perlop man page this is said to be the "rightward precedence" of list operators.
  • The external precedence of a list operator is the same as the precedence of a Perl term. This is the same as the external precedence of parentheses. In the current perlop man page, this is said to be the "leftward precedence" of list operators.
  • List operators do not have chirality.

Other Problems with Chirality

Operator Chirality is Hard to Define

Above, I deferred the question of how to define left and right precedence. Now I'll come back to it.

Giving the same operator two different precedences violates the textbook definition of precedence. Precedence is a hierarchy. Chiral operators break that hierarchy.

Consider an operator which is to the right of one list operator, but to the left of another list operator. How do you assign it a precedence?

Grouping operators also break the hierarchy, but they do it in a well-defined way. You could modify the Chiral Interpretation so that it is equally well-defined. But I think, if you do so, you'll find you've reinvented grouping.

Operator Chirality is Hard to Describe

Find a Perl book that describes list operator precedence. There are several excellent ones, by experts. Ask yourself: If I were a newbie, and I carefully studied these paragraphs, would I know list operator syntax cold? Or would there still be a lot of cases where I was not sure? The answer to this must be subjective, but my own observation is that many a lucid account of Perl bogs down when it is time to describe the syntax of list operators.

Operator Chirality is not in the Textbooks

"Left precedence" and "Right precedence" certainly sound like academic terms, but to my knowledge they are nowhere in the academic literature. As far as I know, chiral operators are an "ad hoc" explanation invented and used exclusively in attempts to grapple with Perl's list operators.

Both the Chiral Interpretation and the Grouping Interpretation involve giving the same set of operators two different precedences. The difference is that the behavior of grouping operators is well understood and has been carefully documented in the academic literature.

The Perl tradition is not to fret excessively about theory. But when the descriptive going gets tough, it is nice to have theory to fall back on.

Notes

Note 1: The academic literature on parsing is large, and it is risky to assert that something is not "Out There" somewhere. But there's no sign of "left precedence" and "right precedence" in the very comprehensive Grune & Jacobs, Parsing Techniques: A Practical Guide - Second Edition.

by Jeffrey Kegler at February 18, 2011 20:20 UTC

Leo Lapworth: ActiveState PPM index + download stats

ActiveState have updated their PPM index page:

http://code.activestate.com/ppm/

As reported at http://www.activestate.com/blog/2011/02/ppm-index-new-way-browse-perl-packages.

It is interesting to see which are the popular downloads and, as an author, the number of downloads of your own modules. They also have a nice chart of which OSes the module has been built for.

Not so sure that the example in the article - DBD::Mysql Failing on OSX - is such a good showcase! (Looking at the report, it just seemed that mysql_config wasn't in the build server's path or something, so a setup issue, not a module issue.)

But that aside it's interesting to see how far PPM seems to have come.

by Ranguard at February 18, 2011 08:33 UTC

Sebastian Riedel: Interview about Mojolicious

Tara Gibbs interviewing yours truly for the ActiveState Blog.

On Saturday, Sebastian Riedel released version 1.1 of Mojolicious, the next generation web framework for Perl. We love Perl and web development here at ActiveState, so we contacted Sebastian for an interview.

February 18, 2011 07:24 UTC

February 17, 2011

Justin Mason: Against The Use Of Programming Languages in Configuration Files

It’s pretty common for apps to require “configuration” — external files which can contain settings to customise their behaviour. Ideally, apps shouldn’t require configuration, and this is always a good aim. But in some situations, it’s unavoidable.

In the abstract, it may seem attractive to use a fully-fledged programming language as the language to express configuration in. However, I think this is not a good idea. Here are some reasons why configuration files should not be expressed in a programming language (and yes, I include “Ruby without parentheses” in that bucket):

Provability

If a configuration language is Turing-incomplete, configuration files written in it can be validated “offline”, ie. without executing the program it configures. All programming languages are, by definition, Turing-complete, meaning that the program must be executed in full before its configuration can be considered valid.

Offline validation is a useful feature for operational usability, as we’ve found with “spamassassin –lint”.
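
As a sketch of what offline validation can look like for a Turing-incomplete, key/value-style configuration (the file name and the allowed setting names here are invented, loosely modelled on SpamAssassin-style options):

#!/usr/bin/perl
use strict;
use warnings;

# Settings the (imaginary) application knows about.
my %allowed = map { $_ => 1 } qw(required_score report_safe use_bayes);

my $errors = 0;
open my $fh, '<', 'app.conf' or die "app.conf: $!";
while (my $line = <$fh>) {
    next if $line =~ /^\s*(?:#|$)/;            # skip comments and blank lines
    if ( my ($key) = $line =~ /^\s*(\S+)\s/ ) {
        next if $allowed{$key};
        warn "line $.: unknown setting '$key'\n";
    }
    else {
        warn "line $.: unparseable line\n";
    }
    $errors++;
}
close $fh;
exit( $errors ? 1 : 0 );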

Security

Some configuration settings may be insecure in certain circumstances; for example, in SpamAssassin, we allow certain classes of settings like whitelist/blacklists to be set in a user's ~/.spamassassin/user_prefs file, while disallowing rule definitions (which can cause poor performance if poorly written).

If your configuration file is simply an evaluated chunk of code, it becomes more difficult to protect against an attacker introspecting the interpreter and overriding the security limitations. It’s not impossible, since you can, for instance, use a sandboxed interpreter, but this is typically not particularly easy to implement.

Usability

Here’s a rather hairy configuration file I’ve concocted.

    #! /usr/bin/somelanguage
    !$ app.status load html
    !c = []
    ;c['sources'] = < >
    ;c['sources'].append(
        NewConfigurationThingy("foo_bar",
            baz="flargle"))
    ;c['builders'] = < >
    ;c['bots'] = < >
    !$ app.steps load source, shell
    ;bf_mc_generic = factory.SomethingFactory( <
        woo(source.SVN, svnurl="http://example.com/foo/bar"),
        woo(shell.Configure, command="/bar/baz start"),
        woo(shell.Test, command="/bar/baz test"),
        woo(shell.Configure, command="/bar/baz stop")
        > );
    ;b1 = < "name": "mc-fast", "slavename": "mc-fast",
                 "builddir": "mc-fast", "factory": ;bf_mc_generic >
    ;c['builders'].append(;b1)
    ;SomethingOrOther = ;c

This isn’t entirely concocted from thin air — it’s actually bits of our BuildBot configuration file, from before we switched to using Hudson. I’ve replaced the familiar Python syntax with deliberately-unfamiliar made-up syntax, to emulate the user experience I had attempting to configure BuildBot with no pre-existing Python knowledge. ;)

Compare with this re-stating of the same configuration data in a simplified, “configuration-oriented” imaginary DSL:

add_source NewConfigurationThingy foo_bar baz=flargle

buildfactory bf_mc_generic source.SVN http://example.com/foo/bar
buildfactory bf_mc_generic shell.Configure /bar/baz start
buildfactory bf_mc_generic shell.Test /bar/baz test
buildfactory bf_mc_generic shell.Configure /bar/baz stop

add_builder name=mc-fast slavename=mc-fast
     builddir=mc-fast factory=bf_mc_generic

Essentially, I’ve extracted the useful configuration data from the hairy example, discarded the symbology used to indicate types, function calls, data structure construction, and let the configuration domain knowledge imply what’s necessary. Not only is this easier to comprehend for the casual reader, it also reduces the risk of syntax errors, by simply minimising the number of syntactical components.
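
For what it's worth, a toy parser for a line-oriented DSL like the one above needs very little code; this is just a sketch, with the directives and resulting data layout invented for illustration rather than taken from any real tool:

use strict;
use warnings;
use Data::Dumper;

my %config;
while (my $line = <DATA>) {
    next if $line =~ /^\s*(?:#|$)/;
    my ($directive, @args) = split ' ', $line;

    # key=value arguments become options, everything else is positional
    my %opts       = map  { /^(\w+)=(.*)$/ ? ($1 => $2) : () } @args;
    my @positional = grep { !/^\w+=/ } @args;

    push @{ $config{$directive} }, { args => \@positional, opts => \%opts };
}

print Dumper(\%config);

__DATA__
add_source NewConfigurationThingy foo_bar baz=flargle
buildfactory bf_mc_generic source.SVN http://example.com/foo/bar
add_builder name=mc-fast slavename=mc-fast builddir=mc-fast factory=bf_mc_generic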

See Also

The Wikipedia page on DSLs is quite good on the topic, with a succinct list of pros and cons.

This StackOverflow thread has some good comments — I particularly like this point:

When you need your application to be very “configurable” in ways that you cannot imagine today, then what you really need is a plugins system. You need to develop your application in a way that someone else can code a new plugin and hook it into your application in the future.

+1.

This seems to be a controversial topic — as you can see, that page has people on both sides of the issue. Maybe it fundamentally comes down to a matter of taste. Anyway — my $.02.

(Update: discussions elsewhere: Proggit, HackerNews)

(Image credit: Turn The Dial by VERY URGENT Photography)

by Justin at February 17, 2011 23:15 UTC


Perl BuzzPerlbuzz news roundup for 2011-02-17

These links are collected from the Perlbuzz Twitter feed. If you have suggestions for news bits, please mail me at andy@perlbuzz.com.

by Andy Lester at February 17, 2011 15:46 UTC


Leo LapworthPortable plugin apps?

I might be working on a project for a friend shortly. They have lots of experience with developers who use Drupal - and I get the feeling (could be wrong) that you can semi-plug-and-play with Drupal?

So..
Want a wiki - use X plugin
Want a user system - use Y plugin
Want user profiles - use Z plugin
Want user gallery - use A plugin
Want a blog - use B plugin

So I started looking around, and there are plugins for specific frameworks, or standalone applications written in a framework, WebGUI CMS seems to have a lot of features but I want to use my framework of choice (and experience!).

Having been playing with Plack it got me thinking...

Just as Plack/PSGI sits between the webserver and your code, could someone (brighter than me!) come up with a standard for sitting between a framework and an app (blog/gallery/wiki/forum etc)?

The app would need to have a standard way of initializing (creating db tables etc), but with so much discussion of NoSQL databases that might not be so problematic. Maybe the apps could focus on just data - supplying default templates which can be overwritten in the framework.
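
Purely as a thought experiment, and not any existing API, a framework-agnostic "app" contract along those lines might look something like this (every name below is made up):

package Pluggable::App::Blog;
use strict;
use warnings;

sub new { return bless {}, shift }

# One-off setup: create tables, collections, whatever the storage needs.
sub deploy {
    my ($self, $storage) = @_;
    $storage->create_collection('posts');
}

# Routes the host framework should mount, expressed in a neutral form.
sub routes {
    return (
        [ GET  => '/posts',     'list_posts'  ],
        [ POST => '/posts/new', 'create_post' ],
    );
}

# Default templates, which the hosting site is free to override.
sub templates { return { list_posts => 'posts/list.tt' } }

1;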

Anyway I can see so many issues, but as I start this project I get the feeling I may have to reinvent some wheels (or at least remould some existing ones) and that just doesn't seem right!

by Ranguard at February 17, 2011 14:13 UTC


Curtis Poeis_almost()

I struggled with a problem where a given method would return an array of array refs of data, but the order (and sometimes the presence) of array ref elements was sometimes slightly different. This is because this code needed to test real data and I could not mock the results. After giving this some thought, I realized I wanted something like the Levenshtein edit distance for data structures. Marcel Grünauer suggested that each element get assigned a Unicode character. This solves my problem nicely with the following code ...

(Fair warning, this is a hack)

use strict;

use Test::More 'no_plan';

use Test::Differences;
use Data::Dumper;
use Text::WagnerFischer 'distance';

sub is_almost($$$;$) {
    my ( $have, $want, $threshhold, $message ) = @_;

    $message ||= 'The two arrays should be close enough';

    unless ( 'ARRAY' eq ref $have and 'ARRAY' eq ref $want ) {
        require Carp;
        Carp::confess(
            "First two arguments to is_almost() must be array refs");
    }

    local $Test::Builder::Level = $Test::Builder::Level + 1;
    if ( !@$want ) {
        if ( !@$have ) {
            pass $message;
        }
        else {
            eq_or_diff $have, $want, $message;
        }
        return;
    }

    my %char_for;
    my $index = 1;
    my ( $have_str, $want_str ) = ( '', '' );
    local $Data::Dumper::Indent   = 0;
    local $Data::Dumper::Sortkeys = 1;
    local $Data::Dumper::Terse    = 1;

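    # Dump each element to a canonical string and give each distinct dump its
    # own single character, so the two arrays can be compared as plain strings.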
    foreach my $element (@$have) {
        $have_str .= $char_for{ Dumper($element) } ||= chr( $index++ );
    }
    foreach my $element (@$want) {
        $want_str .= $char_for{ Dumper($element) } ||= chr( $index++ );
    }
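    # Normalise the edit distance by the length of the expected array so the
    # threshold doesn't depend on how many elements there are.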
    my $distance = distance( $have_str, $want_str ) / @$want;
    if ( $distance <= $threshhold ) {
        pass $message;
        if ($distance) {
            diag "Distance is $distance";
        }
    }
    else {
        eq_or_diff $have, $want, $message;
        diag "Distance is $distance";
    }
}
my $want = [
    [ 1, 'North Beach',       'au', 'city' ],
    [ 2, 'North Beach',       'us', 'city' ],
    [ 3, 'North Beach',       'us', 'city' ],
    [ 4, 'North Beach Hotel', 'us', 'hotel' ],
    [ 5, 'North Beach',       'us', 'city' ],
    [ 6, 'North Beach',       'us', 'city' ],
];
my $have = [
    [ 1, 'North Beach',       'au', 'city' ],
    [ 2, 'North Beach',       'us', 'city' ],
    [ 3, 'North Beach',       'us', 'city' ],
    [ 4, 'North Beach Hotel', 'us', 'hotel' ],
    [ 5, 'North Beach',       'us', 'city' ],
    [ 6, 'North Beach',       'us', 'city' ],
];
is_almost $have, $want, .20;

$have = [
    [ 2, 'North Beach',       'us', 'city' ],
    [ 3, 'North Beach',       'us', 'city' ],
    [ 4, 'North Beach Hotel', 'us', 'hotel' ],
    [ 5, 'North Beach',       'us', 'city' ],
    [ 6, 'North Beach',       'us', 'city' ],
];
is_almost $have, $want, .20;
$have = [
    [ 2, 'North Beach',       'us', 'city' ],
    [ 3, 'North Beach',       'us', 'city' ],
    [ 4, 'North Beach Hotel', 'us', 'hotel' ],
    [ 5, 'North Beach',       'us', 'city' ],
    [ 6, 'North Beach',       'us', 'city' ],
    [ 1, 'North Beach',       'au', 'city' ],
];
is_almost $have, $want, .20;
__END__
ok 1 - The two arrays should be close enough
ok 2 - The two arrays should be close enough
# Distance is 0.166666666666667
not ok 3 - The two arrays should be close enough
#   Failed test 'The two arrays should be close enough'
#   at almost.pl line 90.
# +----+------------------------------+----+------------------------------+
# | Elt|Got                           | Elt|Expected                      |
# +----+------------------------------+----+------------------------------+
# |    |                              *   0|1,North Beach,au,city         *
# |   0|2,North Beach,us,city         |   1|2,North Beach,us,city         |
# |   1|3,North Beach,us,city         |   2|3,North Beach,us,city         |
# |   2|4,North Beach Hotel,us,hotel  |   3|4,North Beach Hotel,us,hotel  |
# |   3|5,North Beach,us,city         |   4|5,North Beach,us,city         |
# |   4|6,North Beach,us,city         |   5|6,North Beach,us,city         |
# *   5|1,North Beach,au,city         *    |                              |
# +----+------------------------------+----+------------------------------+
# Distance is 0.333333333333333

It seems many of the "edit distance" modules struggle with Unicode, so I played with different ones until I found one that gave results I considered vaguely satisfactory.

This is the first pass at a rough, rough hack. Suggestions welcome.

by Ovid at February 17, 2011 12:30 UTC


Sawyer XSyntax police?

In a small script someone wrote at work I saw the indirect pattern of new Object, instead of the more correct form of Object->new. When I inquired (okay, I said "WTF?!"), he said that he just copied the synopsis of a module. Oh, right.. some of the synopses (plural of "synopsis", bet you didn't know that!) still have some outdated syntax examples.
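
For anyone who hasn't bumped into it, here's the difference in a nutshell (My::Widget is a made-up class used only for illustration):

use strict;
use warnings;

{
    package My::Widget;   # made-up class, defined inline so the example runs
    sub new { my ($class, %args) = @_; return bless {%args}, $class }
}

# Indirect object notation: perl has to guess whether "new" is a method on
# My::Widget or an ordinary subroutine, and the guess can go wrong.
my $indirect = new My::Widget(size => 3);

# Plain method call: unambiguous, and what current style guides recommend.
my $direct = My::Widget->new(size => 3);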

I'm not gonna write about how we should all update our PODs to remove syntax that hasn't been (or shouldn't have been, at least) written for the last 10 years, even though I should! Instead, I want to talk about the reply my co-worker got when he opened a ticket asking for the synopsis to be updated, per my suggestion.

He got the reply "what is this, the syntax police?" Perhaps half-jokingly, but still problematic, IMHO.

Police? No. Neighbors, family members and friends? Yes!

I wouldn't trust the police as far as I can throw a piano, and I don't even have a piano, so you can bet your ass I can't throw one very far! However, I do trust my friends and family (and maybe even my neighbors) to help me in a time of need, to advise me, to assist me, to care about me getting a better result. (Also, if you're in Texas, it's more likely that your neighbor will have much more firearm than your local police department)

How have some people become so bitter towards their community members when suggesting a correction? I'm not asking you for a kidney, I'm asking you to correct an example that presents code people shouldn't write anyway. You don't want to update the POD? Fine, but why would you be abusive towards someone who just wants to improve Perl? To make it a more understandable, correct language?

Since when are suggestions to correct ambiguous syntax taboo? Soon I'll advise someone to use lexical variables and I'll get my head bitten off.

Sure, I'm a bit overreacting. I still think how we treat each other is important, and this is one way we can improve.

by Sawyer X at February 17, 2011 11:56 UTC

February 16, 2011


Justin MasonLinks for 2011-02-16

by dailylinks at February 16, 2011 18:05 UTC


CPAN TestersCPAN Testers' CPAN Author FAQ

David Golden recently posted regarding a comment from Leon Timmermans on IRC. Leon highlighted a problem when CPAN authors try to find information about CPAN Testers, and how they can request testers to do (or not do) something with a distribution they've just uploaded.

The page they are looking for is the CPAN Author FAQ on the CPAN Testers Wiki. Although there is plenty of information for authors, the page doesn't appear prominently on search engines when someone searches for that kind of information.

As such, David has suggested that people tweet or post about the page, which includes this post ;) In addition, I'm going to look at adding this and potentially other useful pages as quick links on other CPAN Testers family sites. If there are specific pages you think should be mentioned, please let me know and we'll look at how best we can raise their profile too.

Cross-posted from the CPAN Testers Blog

by CPAN Testers at February 16, 2011 09:21 UTC

February 15, 2011


David GoldenHow to find the CPAN Testers Authors FAQ

If you are a CPAN author and have ever been stumped how to get CPAN Testers to do something (or stop doing something) when testing your distribution, you should read the CPAN Author FAQ on the CPAN Testers wiki.

leont pointed out on IRC that the CPAN Author FAQ is very hard to find and doesn't currently show up in search results, so if you would like to help out, please blog or twitter about it or something and help point some links in the right direction. :-)

Thanks!

by dagolden at February 15, 2011 20:59 UTC


Rafael Garcia SuarezDropbox config change from the CLI

I use Dropbox on my MacBook. It's neat. However, for some reason it's not really autodetecting my proxy, which is completely set up via a master proxy.pac file.

I already have a shell script that takes care of adjusting my SSH configuration and my custom proxy.pac depending on where I am, so I just extended it to change Dropbox's configuration and restart it. Here's the gist of it:

by Rafael (noreply@blogger.com) at February 15, 2011 11:18 UTC


David GoldenCPAN.pm release candidate

I recently uploaded CPAN version 1.94_65, which is the 15th development release since 1.9402 and represents almost 18 months of development work. Barring any show-stoppers, a stable release is expected in the next month. Meanwhile, 1.94_65 will be merged into the next Perl core development release and the stable version will be part of Perl 5.14 this spring.

Please give it a try! From the command line:

$ cpan DAGOLDEN/CPAN-1.94_65.tar.gz

Thank you to everyone contributing patches or commits (according to the git log): Andreas Koenig, David Golden, Frank Wiegand, Nick Patch, Robert Bohne, Tomas Doran, brian d foy and burak. Thank you as well to anyone else who contributed to RT tickets or sent in patches outside git.

Here is a summary of major changes and bug fixes since 1.9402.

New features

  • Major simplification of the FirstTime experience for new users, including auto-pick of CPAN mirrors
  • Added support for bootstrapping local::lib when the user does not have write access to perl's site library directories
  • Added support for and prerequisite on HTTP::Tiny for pure-perl HTTP bootstrapping
  • Added support for META/MYMETA.json files if CPAN::Meta is installed
  • Quieter user interface: made lots of '$module missing' type warnings only warn once; eliminated 'no YAML' warnings for distroprefs if there are no distroprefs.
  • Allows Foo/Bar.pm on the commandline to mean Foo::Bar
  • Allows calling make/test/install with regexp if unambiguous
  • bzip2 support should now be on par with gzip

New configuration options

  • added 'atexit' option for scan_cache
  • new config option prefer_external_tar (RT#64037)
  • new config variable version_timeout used in CPAN::Module::parse_version()

RT Tickets closed

  • RT #63357: use Dumpvalue when dumping potential crap
  • RT #62986: original config directories will be found even if File::HomeDir is later installed
  • RT #62064: build_requires_install_policy set to "no" did not work correctly
  • RT #61607: make the FTP download code more robust
  • RT #59216: make sure $builddir exists before calling tempdir
  • RT #57482 and RT #57788 revealed that configure_requires implicitly assumed build_requires instead of normal requires.
  • RT #55093: no_proxy doesn't work with more then one entries
  • RT #55091: don't ask the proxy credentials if proxy_user empty
  • RT #53305: amended lib/App/Cpan.pm because of a regression bugfix: Non-English locales got no diagnostics on a failed locking due to permissions
  • RT #51018: do not switch to default sites when we have a user-configured urllist
  • RT #48803: avoid 'unreached' if not following configure_requires bugfix: treat modules correctly that are deprecated in perl 5.12. improved support for Perl core module deprecation
  • RT #47774: allow duplicate mention of modules in Makefile prelude
  • Fixed rt.perl.org#72362: CPAN ignoring configure_requires. Also fixed (MY)META.yml processing to always prefer Parse::CPAN::Meta, if available.
  • Fixed rt.perl.org#72348: missing CPAN::HandleConfig::output;

Other bug fixes

  • Adds HOMEDRIVE/HOMEPATH or USERPROFILE as home directory options on Windows
  • Fixed several recent regressions related to external transport tools (ncftp, lynx, curl, etc)
  • Fixed quoting for downloading into directories containing whitespace
  • Solaris tar gets more handholding to avoid solaris tar errors
  • Portability fix: By-pass alarm() calls if we're running under perl 5.6.x && $OS is Windows.
  • Work around win32 URI::file volume bug
  • Prerequisites declared with the string "==" now supported

by dagolden at February 15, 2011 05:06 UTC

February 14, 2011


Sebastian RiedelMojolicious 1.1: Awesome features on a caturday

Hearteyedcat

I'm very happy to announce the release of Mojolicious 1.1 (Smiling Cat Face With Heart-Shaped Eyes).
The last few weeks have been really busy and we've got some exciting new features to show you.

Routing shortcuts
With the addition of routing shortcuts we are going to take the whole concept to the next level.
You can add your very own keywords to the router and even make them reusable through plugins.

# Simple "resource" shortcut
$r->add_shortcut(resource => sub {
  my ($r, $name) = @_;

  # Generate "/$name" route
  my $resource = $r->route("/$name")->to("$name#");

  # Handle POST requests
  $resource->post->to('#create')->name("create_$name");

  # Handle GET requests
  $resource->get->to('#show')->name("show_$name");

  return $resource;
});

# POST /user -> {controller => 'user', action => 'create'}
# GET /user -> {controller => 'user', action => 'show'}
$r->resource('user');

CSS3 selectors on the command line
Don't you hate checking huge HTML files from the command line?
Thanks to the addition of CSS3 selectors to the "mojo get" command this is going to change now.

% mojo get http://mojolicio.us 'head > title'
<title>Mojolicious Web Framework - Join the Perl revolution!</title>

Just select the parts you're actually interested in.

% mojo get http://mojolicio.us 'a[href]' attr href
http://latest.mojolicio.us
http://mojolicio.us
http://mojolicio.us/perldoc
https://github.com/kraih/mojo/wiki
https://github.com/kraih/mojo
http://search.cpan.org/dist/Mojolicious
http://groups.google.com/group/mojolicious
http://blog.kraih.com
http://twitter.com/kraih
http://search.cpan.org/perldoc?CGI
perldoc?Mojolicious
perldoc?Mojolicious/Lite
http://plackperl.org
http://catalystframework.org
http://mojolicio.us/perldoc
http://mojolicio.us

And test your applications more effectively.

% mojo generate lite_app
  [exist] /Users/sri
  [write] /Users/sri/myapp.pl
  [chmod] myapp.pl 744

% ./myapp.pl get --verbose --mode testing /welcome 'head > title' text
GET /welcome HTTP/1.1
User-Agent: Mojolicious (Perl)
Content-Length: 0
Host: localhost:13359

HTTP/1.1 200 OK
X-Powered-By: Mojolicious (Perl)
Content-Type: text/html;charset=UTF-8
Connection: Keep-Alive
Date: Mon, 14 Feb 2011 03:49:42 GMT
Server: Mojolicious (Perl)
Content-Length: 108

Welcome

Automatically generated route names
All routes now have automatically generated names based on the route pattern.

% mojo generate lite_app
  [exist] /Users/sri
  [write] /Users/sri/myapp.pl
  [chmod] myapp.pl 744

% ./myapp.pl routes
/perldoc perldoc (?-xism:^/perldoc)
/welcome welcome (?-xism:^/welcome)

And don't worry about conflicts, custom names have a higher priority than generated ones.

#!/usr/bin/env perl

use Mojolicious::Lite;

get '/welcome';

app->start;
__DATA__

@@ welcome.html.ep
<!doctype html><html>
  <head><title>Welcome!</title></head>
  <body>Welcome to Mojolicious!</body>
</html>

Mode specific exception and not found templates
To make testing and deployment easier you can now just add your own mode specific exception and not found templates.

#!/usr/bin/env perl

use Mojolicious::Lite;

get '/welcome' => sub {
  my $self = shift;
  $self->render(text => 'Hi there!');
};

app->start;
__DATA__

@@ not_found.production.html.ep
<!doctype html><html>
  <head><title>Dude!</title></head>
  <body>Where is my page?</body>
</html>

After all we really don't want to accidentally replace the awesome development mode templates. ;)

% ./myapp.pl get --mode testing / html all
Not Found
    Page not found, want to go home?

% ./myapp.pl get --mode production / html all
Dude!Where is my page?

Reusable router
We've also made the whole router reusable outside of the Mojolicious framework.
This should help a lot with optimization and testing in the future, as well as allow other Perl web frameworks to reuse more of our infrastructure.

#!/usr/bin/env perl

use Mojolicious::Routes;
use Mojolicious::Routes::Match;

# Create some routes
my $r = Mojolicious::Routes->new;
$r->get('/:action')->to(controller => 'foo');

# Match method and path against routes
my $m = Mojolicious::Routes::Match->new(GET => '/bar')->match($r);

# Results
print $m->captures->{controller}, "\n";
print $m->captures->{action}, "\n";

# Generate path from route
print $m->path_for(action => 'baz'), "\n";

And as usual there is a lot more to discover, see Changes on GitHub for the full list of improvements.

Have fun!


February 14, 2011 07:57 UTC


Curtis JewellProgress so far...

5.12.3 build is being painful, but there is progress. I got a quick response from the maintainers of the Math::Big* modules on one issue, but there's another I haven't reported to them yet - nor do I know whether it's their issue or mine! Math::BigInt::GMP is ABENDing during its tests. This could be caused by the build of libgmp I'm using, (which was built from a mercurial checkout from last year) so I'm going to put fixing that off until after Beta 1, and just skip it for now... problem being that it's near the bottom of a dependency tree, so all those modules will have to be yanked for Beta 1.

But at least a build of Strawberry Perl 5.12.3 will be available to investigate with!

February 14, 2011 01:49 UTC

February 13, 2011


Curtis JewellProgress...

Built 5.10.1 successfully again (hit a bug that made me surprised it ever built before) and 5.12.3 built through perl, at least, before it broke. (ran into the fact that I forgot to specify where pari for 5.12.3 lives.) I fixed that, and I'm doing a completely non-forced build of 5.12.3 to make sure everything works fine, then I'm going to chase after the missing-README bug tomorrow.

Oh, and I have a code-signing certificate, so I can sign the .msi's now. No more off-putting yellow message about installing unsigned software. Instead, there'll be a blue message in the same place - if I remember to go and sign before uploading. (my signature provider only has a 'new-style' signing server, so I've got to copy the files up to my Windows 7 machine before I sign and upload them.)

So, 5.10.1.5 Beta 1 and 5.12.3.0 Beta 1 should be up within the next few days, I hope.

February 13, 2011 03:20 UTC

February 11, 2011


Curtis JewellI'm getting back in the swing of things...

I've got my 'smoker' machine set up for Strawberry 32-bit finally, and it's successfully built a 5.12.2 build. The script to set up a build environment from scratch, and then use it to do a build, is at http://hg.curtisjewell.name/strawberry-smoker - I'll expand it to do QA testing, and to send to an e-mail list, later.

Yes, there was no README file when the build was finished. That's likely the reason there were problems with the .msi installations crashing - it expects to change file locations in the README file. I thought upgrading would cause the crashes, but instead, it looks like upgrading would have HIDDEN the situation that caused them.

I'm going to try a 5.10.1 build today and see how it works.

Hopefully I'll be able to get a beta of 5.12.3 built this weekend, and start releasing code.

February 11, 2011 07:33 UTC

February 10, 2011


Justin MasonLinks for 2011-02-10

  • Gerrit, Git and Jenkins : This is the future of code review. Commit directly from your git checkout to the Gerrit code-review system; change is immediately web-visible and enters the review workflow; at the same time, Jenkins checks out the proposed change and runs the test suite; once it’s approved, it automatically gets checked in. Brilliant!
    (tags: git coding code-review workflows jenkins gerrit c-i testing automation)

by dailylinks at February 10, 2011 18:05 UTC


Curtis PoeMy daughter and other stuff

On February 5th, my wife and I celebrated the birth of our lovely daughter, Lilly-Rose.

Needless to say, this has impacted my posting here :)

At one point before the birth when we were both rather bored, I was trying to get some work done, but I had some code which I could not load because I lacked an Internet connection. Not all modules were present on my system, config files were missing, etc. However, I desperately needed to unit test my code and I quickly got fed up with my standard bag of tricks for forcing modules to load. Thus, I wrote a module to handle that bag of tricks for me. Later, after releasing it to github, I thought about dedicating it to my wife and newborn daughter, but I didn't think they'd like the Package Butcher dedicated to them.

Still, it's handy code and it works like this:

my $butcher = Package::Butcher->new(
    {
        package     => 'Dummy',
        do_not_load => [qw/Cannot::Load Cannot::Load2 NoSuch::List::MoreUtils/],
        predeclare  => 'uniq',
        subs => {
            this     => sub { 7 },
            that     => sub { 3 },
            existing => sub { 'replaced existing' },
        },
        method_chains => [
            [
                'Cannot::Load' => qw/foo bar baz this that/ => sub {
                    my $args = join ', ' => @_;
                    return "end chain: $args";
                },
            ],
        ],
    }
);
$butcher->use(@optional_import_list);

In other words, many of the common issues which would prevent a package from loading are dealt with here. You can predeclare subs (with prototypes), prevent naughty packages from loading, handle awful method chains embedded in the code and inject your own code.

It needs a fair bit of TLC (kind of like my daughter), but it helped me to test some code which was otherwise not testable (and yes, the code was broken. Yay for testing!)

by Ovid at February 10, 2011 06:35 UTC

February 09, 2011


Justin MasonLinks for 2011-02-09

by dailylinks at February 09, 2011 18:05 UTC

February 08, 2011


Sawyer XTemplate::Toolkit META variables and SETs in Dancer

There is one very advanced feature in Template::Toolkit called META variables. META variables are variables that you define in a processed template that are later available to the WRAPPER template. That means that you can set, for example, the title of the page in the main layout from the inner content template. That's also what it's usually useful for.

However, since Dancer provides its own "layout" option, it basically separates these two processes (rendering a WRAPPER and rendering an inner template), making Template::Toolkit unable to simply define a WRAPPER. So... how does one get it to work?

Well, it's possible to kindly ask Dancer to step aside for a bit, and give you more control over the templating, which means you can do some more advanced stuff, like using SET to set variables in the WRAPPER, or using META variables. Here's how easy it is:

In your config.yml file, you need to:

  1. Disable your "layout" configuration (either comment it out or remove it)
  2. Make sure you're using Template::Toolkit:

     template: "template_toolkit"

  3. Add the following configuration to enable main.tt as a WRAPPER:

     engines:
       template_toolkit:
         WRAPPER: layout/main.tt

There! You now have full and complete control over the template. Dancer's template() DSL will still work, but it will no longer split the layout processing into a separate step, so you have a lot of extra control over it.

Check out the PEG (Perl Ecosystem Group) website's configuration file, contact template and main wrapper (specifically the title) for a live example.

Enjoy! :)

by Sawyer X at February 08, 2011 22:50 UTC


Sawyer XFOSDEM, second report - the talks!

THIS POST INCLUDES PICTURES!

On the second day of FOSDEM, the Dancer core crew pretty much took over the Perl dev room! We are 4 developers, and we gave 5 talks: SPORE (by Franck Cuny), "Code, release, market" (by Alexis Sukrieh), Curses::Toolkit (by Damien Krotkine), Moose (Sawyer X) and Dancer (Sawyer X). Somehow, all those talks mentioned Dancer, whether it was by the speaker noting the projects he works on, or by using it as an example in the talk (like Alexis did). The amount of noise and buzz we created around Dancer was very positive!

There were a lot of good talks in the Perl dev room (such as DTrace, XML::Compile, Packaging Perl), but I'll try to cover just a few:

Gabor Szabo (szabgab) gave two very good talks, one on Perl 6 and one on Padre, the Perl IDE, ya know. I think they were very well received. Considering the Perl 6 talk was the first talk that day, early morning (I asked Sukria on the way, "when's the last time you had Perl 6 for breakfast?"), a lot of people were there. Perl 6 is not well understood by many people, and the potential (some of which has already been reached) is often missed. I think Gabor made it compelling to the audience and they were very tuned in and seemed involved.

The Padre talk was a very good example of how humor gets people to listen to you. People care about having a good time more than about learning, and people learn better when they have a good time, so try to mix it up! I think it also got me fueled up for my talk. Oh, and I got to meet Zeno Gantner from Padre, very nice guy!

Dams (Damien Krotkine) gave a talk about his Curses::Toolkit, which allows you to write visual CLI applications using Curses in a Gtk-like interface, which is pretty nifty. He wrote a few applications to show it off, one of which was a Twitter client he wrote in a few hours ("it took me long because I had a nasty bug", are you kidding me? :) and he actually got people to drop their jaws when he showed them how he's resizing a terminal and the windows automatically resize and titles automatically scroll, if that's what you want. It's really amazing what this guy has done using CLI only. I guess after I found out he was a Gentoo hacker, things made more sense. :)

I gave the Moose talk, and I think it went rather well. I was very hyperactive and paused only for short breaths. People really enjoyed themselves and overall I think it went very well. Got the room of 80 people to fill up. Pretty good, eh?

Later I gave the Dancer talk, and the crowd went nuts. If 80 people filling up the room, taking all seats, is impressive, imagine what 100 people sitting and standing everywhere looks like! Pretty much all my jokes landed (which is hard because my jokes are lame), and people had a very good time! I was really thrilled that I felt comfortable enough to give both talks for such a big crowd, successfully, with a lot of confidence. I think the reason is definitely the company I was with. Being with the Dancer team was a tremendous boost to my self-esteem and made me feel very open and relaxed.

Next talk was Franck Cuny's SPORE talk. This guy designed (and wrote!) a framework for creating a client to any REST API with a simple configuration, using the astounding power of Moose's meta-class. He started with the history of SPORE, how he came up with it, and gave examples on how to write your own client using Net::HTTP::Spore. Seriously kickass! I wish people would understand how incredibly useful it is. No need to write your own code anymore. I'll try it out for MetaCPAN::API (don't worry, a post is still pending).

Last, but definitely not least, was a talk by Alexis Sukrieh (sukria) about how to write and maintain your software, giving lessons he learned from Dancer and other projects. He started by comparing "code" and "software": software is a hell of a lot more than code. Coding is the easy part. He goes on to explain what tools you have, how to treat your users, how to market your software, and giving a lot of interesting advice. I learned quite a lot from Sukria over the time while working with him on Dancer, and this talk was an amalgamation of these lessons.

I've checked Twitter for some comments on the talks. I don't have a Twitter account, but I just went to the site and looked over some history, so don't consider this an exhaustive list of all tweets relating to the talks. I do believe there are more that I missed:

  • "Attending the first of the modern perl squad talks : SawyerX and #moose ! #fosdem #perl"
  • "#dancer and #spore follow in afternoon :) #fosdem #perl #modernperlsquad"
  • "Config management devroom overflows; Perl/Moose, then. #FOSDEM"
  • "in less than 20 minutes the #modernperlsquad is going to invade the perl devroom! Get ready! #FOSDEM"
  • "Very few people in the perl dev room use catalyst. Interesting. #fosdem"
  • "The best feature of #perl #dancer is @sawyerX =)"
  • "Dancer talk in the perl room has started very well. Very lively speaker."
  • "Less code, more dance #perl #dancer http://bit.ly/hmifHm"
  • "Moose talk now in the perl devroom, by @PerlDancer's core-developer SawyerX. Go Sawyer Go!"
  • "Best speakers so far: Andy Wingo and the #perldancer guy. #fosdem"
  • "@franckcuny presenting SPORE at #fosdem: how lazyness makes better software"
  • "Go @franckcuny ! Go ! #spore #fosdem #modernperlsquad @linkfluence"
  • "Learning about Moose #fosdem"
  • "Interesting talk about Dancer at #fosdem #perl ... early tests failed; time to rtfm"
  • #modernperlsquad on stage ! (moose)
  • 19 minutes later, I can now test my script as if the whole shebang was there. <3 you, #Dancer.

You should checkout Sukria's review over here!

There is more, but I'll leave that to upcoming posts.
See you soon!

dancer-hack-session.jpg

sawyer_fosdem_moose1.jpg

sawyer_fosdem_moose2.jpg

by Sawyer X at February 08, 2011 09:37 UTC

February 07, 2011


David GoldenFive Test::More features you might not be using yet

I've been using Test::More for so long that I sometimes forget about new features that have been added in the last couple years. If you're like me and would like a refresher, here's a list of five useful features that you might want to start using. Unless otherwise noted, you will need at least version 0.88 of Test::More.

1. done_testing instead of no_plan

If you don't know how many tests you are going to run (or don't want to keep count yourself), you used to have to specify 'no_plan' at the start of your tests. That can lead to surprises if your tests exit prematurely. Instead, put the done_testing function at the end of your tests. This ensures that all tests actually run.

use strict; use warnings;
use Test::More 0.88;

ok(1, "first test");
ok(1, "second test");

done_testing;

2. new_ok for object creation

You used to have to create an object and then call isa_ok on it. Now those two can be combined with new_ok. It will also let you pass arguments in an arrayref to be used in the call to new.

use strict; use warnings;
use Test::More 0.88;

require Foo;
my $obj = new_ok("Foo");
# ... use $obj in testing ...

done_testing();

Changed "require_ok" to "require" per Ovid's comment, below.

3. Add diagnostics only in verbose testing

The old diag function always prints to stderr. Particularly for debugging notes, that can clutter up the output when run under a harness. You can now use the note() function to add diagnostics that are only seen in verbose output.

use strict; use warnings;
use Test::More 0.88;

note("Testing on perl $]");
ok(1, "first test");

done_testing();

4. Explain data structures in diagnostics

I often find myself wanting to dump a data structure in diagnostics, and wind up loading Data::Dumper to do that. Now Test::More can do that for you with the explain() function. The output is a string that you can pass to diag or note.

use strict; use warnings;
use Test::More 0.88;

my $want = { pi => 3.14, e => 2.72, i => -1 };
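# get_data() stands in for whatever code under test produces the structure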
my $have = get_data();

is_deeply($have, $want) or diag explain $have;

done_testing();

5. Encapsulate related tests in a subtest (0.96)

use strict; use warnings;
use Test::More 0.96;

pass("First test");

subtest 'An example subtest' => sub {
  pass("This is a subtest");
  pass("So is this");
};

pass("Third test");

done_testing();

Subtests can have their own plan, but if they don't have one, Test::More acts like there was an implicit done_testing at the end of the code reference. That means you don't have to keep count of tests in a subtest and things still work safely.

You can use a 'skip_all' plan in a subtest, too, which is a useful way of constructing a SKIP block without having to count how many tests are being skipped the way you would with the skip() function.

use strict; use warnings;
use Test::More 0.96;

pass("First test");

my $required_condition = 0;   # stand-in for whatever check gates these tests

subtest 'Like a SKIP block' => sub {
  plan 'skip_all' unless $required_condition;
  pass("This is a subtest");
  # ... many more tests that you don't have to count ...
};

pass("Third test");

done_testing();

by dagolden at February 07, 2011 21:10 UTC


Justin MasonLinks for 2011-02-07

by dailylinks at February 07, 2011 18:05 UTC


CPAN TestersCPAN Testers Summary - January 2011 - Wish You Were Here

Over the past few of months, various fixes and improvements have been made to the Builder process, which builds the pages and support files on the CPAN Testers Reports website. As mentioned in the last summary, this has made a noticeable improvement in the performance of the server. However, there were further fixes and enhancements planned.

Several updates ensued during January, not just for the Builder, but across several parts of the eco-system, both to speed up processing and to reduce the amount of file and database access. Previously several processes recreated their current view by reading the full result set from the database. As we now have over 10 million reports, this can take more time than is reasonable. With the use of the JSON files to record a snapshot, we can now start from a known point, which means we only need to scan a few thousand records. This method has been utilised in other processes, even with smaller database tables and datasets, and the performance improvements have been significant.

For the past couple of weeks we have now been able to update pages within 36 hours of a report being submitted, and at times we have even been less than an hour behind. Looking at the graphs the Builder is now consistently processing more pages than reports. Even though January is typically a quieter month for us, it still produced 333,157 reports. We shall see whether the increased submissions in the coming months make a difference in build times.

You may already know that the 2011 QA Hackathon is happening in Amsterdam this year, but it warrants promotion. While the focus is typically on traditional aspects of QA and testing, hopefully there will be some CPAN Testers projects featured. The hackathon takes place from Saturday 16th April to Monday 18th April, at the offices of principal sponsors, Booking.com. If you're interested in attending, please add your name to the Attendees list, and also add what Projects you want to work on.

Rounding off this summary, an update of the tester mappings reveals we have gained at least 27 new testers, with a total of 38 new mappings. It's interesting to note that many of the new testers are not CPAN Authors, which is great. One of the benefits of CPAN Testers is that you can help to contribute to the project without having to be a hardcore Perl dev, and can contribute as little or as much as you are able. We are fast approaching 11 million test reports and I'm pleased to see we are continually encouraging new people to get involved and keep the submissions rising. Long may it continue.

Cross-posted from the CPAN Testers Blog

by CPAN Testers at February 07, 2011 13:25 UTC

February 06, 2011


Justin MasonIrish Times “Most Read” Article Feed

If you visit the Irish Times at all frequently, you’ll probably have noticed a nifty “wisdom of crowds” feature in the right sidebar: the list of “most read” articles. It’s quite good, since they’re often very interesting articles. Unfortunately, there’s no RSS feed for this feature.

Well, now there is:

by Justin at February 06, 2011 23:07 UTC


Sawyer XBringing Mojolicious to the dancefloor

Dancer's engines are really cool. You wanna know how cool? Here's an example.

If you like Mojolicious' templating system and you want to use it with Dancer, our interchangeable templating engines let you plug in a Mojolicious template engine, if one exists.

And if it doesn't exist, you can write it. Oh wait, someone already did!

You can find it here!

Check out the source to find out how silly easy this is.

So, if there's another template engine you want with Dancer, try to write it! If you have issues, talk to us on IRC (#dancer on irc.perl.org) and check out other template engines.

by Sawyer X at February 06, 2011 12:11 UTC


Sawyer XDancer FOSDEM fuel, first report

Friday and Saturday have been very productive days for Dancer. We wanted to write up this blog post yesterday night but we were waaaaay too tired for that. Instead, you get it this morning while we're sitting at a great Perl 6 talk by Gabor Szabo.

Friday we met up. I tried to wait for Franck at the train station and got lost... several times. He found me in the end and we went to our hotel room with his co-workers. Sukria and Dams arrived later. It was very exciting to meet the guys I've been working closely with for a while and haven't even met in person yet. Free software sure is nuts! :)

We couldn't fix the internet at the hotel (one cable, no wireless) so we spent a lot of time on discussions about important things. While we do not like bureaucracy, some things had to be sorted out and talked about. Here is a short list of things we've settled:

* We decided on a pull request processing policy, which gives programmers more power and independence and allows for a much quicker (yet safe!) way to process our pull requests - whether to approve or to kindly oppose them. When a user presents you with a pull request, it's a gift, and we don't want users to wait more than a day or two for a response. This actually took the majority of the discussion and we made some very good decisions about it.

* We have a new core developer: Damien (dams) Krotkine! Dams has been working with us on Dancer for a while now, providing pull requests, help, documentation, features and fix-ups, and we really wanted to give him more control over Dancer. In fact, he has practically been a core developer for a while; now he finally has an official commit bit. Congratulations, Dams!

Saturday (yesterday) was an even more successful day. We started with a hack session of a few hours. We closed 6 pull requests (all except 1) and all outstanding bugs!! We also had a triage of the issues list and classified them. Some of those were classified as new bugs, while others were classified differently (such as "Change required", which means it's not broken, but there's a recommended behavior change needed). We replied to almost all issues so we know what to fix and when, and raised discussion and changes in other places. We also released a few new versions (from the development branch and the stable branch) of Dancer that also include a new format of the changes log, as discussed in previous posts. You can see it here. I think you'll find it quite nifty! :)

Dams has written two plugins (while working on an example application for his talk - a command-line Curses Twitter client, hell yeah!): Dancer::Plugin::FlashMessage and Dancer::Plugin::Captcha::reCaptcha. While the first one existed, he was able to finalize the implementation to match the spec previously defined by other frameworks (such as RoR) for this feature. From this point on we'll be able to add nifty features that are missing in others. Flavio Poletti is at the forefront of new ideas for the flash message feature. Good job! The Captcha plugin is pretty self-explanatory. It's funny since it's something Gabor suggested on the plane, not necessarily regarding Dancer. Hey, it's already done and you'll see it on CPAN soon. The myth lives on!

As a very small note, I was able to release a first development release of MetaCPAN::API, which I'll talk more about in another post.

There were further discussions about our release policy. PAUSE has a limitation of not allowing authors to upload older versions of already-existing distributions, so we can no longer support new 1.2xxx releases. That means that we'll have to think of versions in a different way. There was a long discussion that reached a late hour (while we hacked, actually), and in the end we decided on a release policy which we will later announce on the mailing list and in the docs, and perhaps in a blog post for others to learn from (hopefully).

That's it for now. Expect another report soon!

by Sawyer X at February 06, 2011 08:54 UTC

February 05, 2011


Sawyer XChanging the changelog

Dave Rolsky has written a compelling post on how not to write a changes log. It's ironic (or is it? the meaning of "irony" is elusive) that while I have much criticism for the changes logs of others (and have commented on them to people in the past), Dancer's changes log is not up to par with what I think it should be, nor what Dave thinks it should be (which is close to what I think).

Understanding Dancer's changes log

Dancer's changes log has two primary goals: recording changes for Dancer users, and giving credit to the people who make them.

This means that for each version we note who made which change, plus additional credit for whoever helped in any way.

While the current format of the changes log is not optimal, we cared more about maintaining the current state, and did not optimize it. Truth be told, it was a bad habit and none of us advocates keeping the current style.

A new path

So, agreeing with Dave about the correct format of a changes log, we will be writing the next log entries in a correct form, including dates of releases and an order of what's more crucial or important.

There is a sentence I've learned with time which serves me well, and I do suggest others learn from it as well: "It is only a mistake if you don't learn anything". Once you learn, it can become a lesson. Hence, we welcome any criticism (note "criticism", not "trashing" - which sometimes people confuse with legitimate criticism), in any form (blog post, pull request, commit comment, IRC rant, etc.) by anyone.

Dave, thanks! :)

by Sawyer X at February 05, 2011 10:39 UTC

February 04, 2011


Justin MasonLinks for 2011-02-04

by dailylinks at February 04, 2011 18:05 UTC


Sawyer XDancer FOSDEM mini-hackathon

I am honored to be sponsored by PEG, and I would like to thank them for it.

A team of Dancer core developers (Alexis Sukrieh, Franck Cuny, Damien Krotkine and myself) will be having a mini-hackathon this FOSDEM. This is made possible since we will all be staying together in the same apartment for the duration of the event.

We will focus our efforts on merging Github Pull Requests and closing as many tickets as possible. New features might be worked on, but it is not part of the official plan. We leave room for improvisation. :)

I want to thank everyone who pushed commits and changes into Dancer. I've been very surprised (yet thrilled) at some of the new faces we've been seeing on our IRC channel (#dancer on irc.perl.org) and in our pull requests. While some of these were merged on the spot, others were waiting longer in the queue. This is what we will try to focus on.

The next post will cover another issue that will be worked on during the hackathon: the Dancer changelog.

If you're arriving at FOSDEM, we have a Perl room and a Perl booth. Feel free to stop by, say hi, catch a few interesting talks and jibber-jabber with us!

by Sawyer X at February 04, 2011 09:59 UTC

February 03, 2011


Justin MasonLinks for 2011-02-03

by dailylinks at February 03, 2011 18:05 UTC

February 02, 2011


perl.comPerl QA Hackathon 2011: Call to Attention

Lars Dɪᴇᴄᴋᴏᴡ has sent out a call for attention for the 2011 Perl QA Hackathon:

The Perl QA hackathon 2011 is taking place from Saturday, April 16th to Monday, April 18th 2011 in Amsterdam, The Netherlands. Attendance is gratis. We would like to know if you are interested in coming and participating. You can also propose other people who should be invited. As with the hackathons in the past years, we aim to fund the travel and accommodation costs for those who cannot get funding otherwise.

We would like to hear about your topics and ideas. Please find further information at the Perl QA Hackathon 2011 Wiki.

by chromatic at February 02, 2011 19:41 UTC


Justin MasonLinks for 2011-02-02

by dailylinks at February 02, 2011 18:05 UTC

February 01, 2011


Sebastian RiedelMojolicious and Plack

While Mojolicious contains a really nice built in web server, which makes especially development and testing very enjoyable, we also have first class support for PSGI and Plack.

% mojo generate lite_app
  [exist] /Users/sri
  [write] /Users/sri/myapp.pl
  [chmod] myapp.pl 744
% plackup myapp.pl
HTTP::Server::PSGI: Accepting connections at http://0:5000/

In fact, it easily beats most web frameworks that were specifically designed for Plack from the get-go.
All Mojolicious applications can automatically detect that they are executed in a PSGI context and act accordingly.

#!/usr/bin/env perl

use Mojolicious::Lite;
use Plack::Builder;

get '/welcome' => sub {
    my $self = shift;
    $self->render(text => 'Hello Mojo!');
};

builder {
    enable 'Deflater';
    app->start;
};

Isn't it pretty?
Everything, including middleware, just works out of the box! :)


February 01, 2011 20:21 UTC


Justin MasonLinks for 2011-02-01

by dailylinks at February 01, 2011 18:05 UTC


David GoldenBelated Modern Perl review

I purchased Modern Perl as soon as it came out and I read it right away. I meant to write it up a long time ago. Here's a copy of the review I posted to Amazon:

I had a hard time characterizing this wonderful book. It explains the fundamentals, but it's not an introductory book like Learning Perl. It covers almost every feature of the Perl 5 language, but it's not a reference book like Programming Perl. It explains common idioms, but it's not a guide to Perl 5 fluency like Effective Perl Programming. It contains many practical suggestions, but it's not a book of tips like Perl Hacks.

I can only describe it as a "textbook". If I had to pick a single book to teach Perl 5, this is the one I'd choose. As I read it, I was reminded of the first time I read K&R (C Programming Language) and how much learning was packed into it. (It's the only college programming text I still have). In a slim 250 pages, Modern Perl obsoletes most of my shelf of Perl 5 books. It's not intended for a complete novice to programming (any more than K&R was), but in the hands of a competent programmer or a diligent student it will teach everything that one needs to know to write Perl 5 well.

What I especially like about Modern Perl is that it puts particular emphasis on understanding fundamental Perl 5 concepts like "context" and "scope". From these and other foundations, one can understand why certain programming idioms have emerged and one can avoid surprises in the odder corners of the language. If you want a book to spoon feed cut-and-paste code to you, this is not the book for you. If you want a book that will teach you to write your own code confidently, this is an excellent resource.

If you already know some programming and want to learn Perl 5, then Modern Perl is the book you should get. If you already know Perl 5, but don't think you know it well, or if you haven't kept up in developments in Perl 5 since the late 1990's, then Modern Perl will get you up to speed.

by dagolden at February 01, 2011 02:24 UTC

January 31, 2011


brian d foyOne more week for OSCON proposals

The OSCON Call for Proposals ends February 7th.

In the Perl track, we're looking for:

  • Perly stuff that deals with Javascript, JSON, or HTML5
  • Who's going to do a Plack talk?
  • Modern Perl or modern Perl.
  • Cool new things you can do with Perl 5.14 (see the Perl 5.14 posts at The Effective Perler)
  • Using Unicode in Perl
  • The latest Perl development support tools
  • Current Perl good-enough practices
  • What's up with Parrot and Perl 6

If you're not up to a 40 minute talk, especially if you're a new presenter, we also have short, low pressure 5-minute lightning talks.

And, since speaker participation includes a peer-review process, I have a lot of advice on improving your chances.

by brian d foy at January 31, 2011 21:21 UTC

January 30, 2011


brian d foyThe 2011 Perl QA Workshop, April 16-18 in Amsterdam

The 2011 Perl QA Workshop will be April 16-18 in Amsterdam, sponsored by Booking.com. If you show up, you might never leave.

The QA workshop is perhaps one of the most productive workshops. Here are my notes and videos from the 2010 Perl QA workshop in Vienna:

by brian d foy at January 30, 2011 01:36 UTC

January 28, 2011


Ricardo SignesWill your perl remain supported -- and what does that mean?

Yesterday, I tweeted this:

Remember, Perl shops: if you're still on 5.8 come April, you're on an unsupported legacy version. Current versions are 5.10.1 and 5.12.3

A few people asked for more details, and in giving them, I said this:

It's more an amalgam of truths than an actual truth.

Right now, the average company using Perl (or Python, or Ruby, etc.) has no support contract for the language. It's free, open source software that comes with no warranty, guarantee, or promises. Of course, everybody knows that this doesn't mean that it's every man for himself. There are a number of volunteers who put in incredible amounts of work to fix bugs, ensure portability, and improve the language itself. The key word above is volunteer, which means, fundamentally, that nobody is under any obligation to fix anything. If you absolutely need work done, you just might have to pay for it. (In my experience, this is exceedingly rare; the lengths to which I have seen the core Perl team go to fix bugs that don't even affect them directly are both staggering and humbling.)

Still, the Perl team wants people to have the right kind of expectations, and that comes in two parts: the team will investigate and try to fix bugs in recent perls, but it won't promise to spend its valuable time on old versions. After all, the internals of perl change over time, and it takes a significant mental effort to keep various major versions of perl's VM and its implementation fresh in one's mind.

It was recently my great pleasure to perform the routine work required to release perl-5.12.3, which contained the most up-to-date copy of perlpolicy, which contains the promises that the core team will try to keep, regarding perl support. Here are two of the key points:

  • we will attempt to fix critical issues in the two most recent stable release series
  • we will attempt to provide "critical" security patches or releases for release series begun within the last three years

In April, the release process for 5.14.0 begins, meaning we'll probably have it in April or May, barring strange circumstances. Once 5.14.0 is out, the official support period for 5.10.x will end, and the chances of bugfixes being applied to the maint-5.10 branch will become very slim. The chances of a new 5.10.x release will become tiny. There simply won't be enough interested volunteers to do the work for such an old version. Maybe if there is a big influx of workers who want to support 5.10, the policy will change -- but that seems a pretty unlikely scenario.

Not only will 5.10.x be out of its normal "official support period," but it will be out of its security update period, too. perlhist tells us that 5.10.0 was released in December, 2007 -- already more than three years ago.
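
If you'd rather ask a program than flip through perlhist, the Module::CoreList module that ships with perl records these dates; here's a minimal sketch (illustrative only, and which versions it knows about depends on how new your installed Module::CoreList is):

use strict;
use warnings;
use Module::CoreList;

# %Module::CoreList::released maps numeric perl versions to ISO release
# dates; versions your installed Module::CoreList doesn't know about
# simply come back undef.
print '5.8.8  was released on ',
    ( $Module::CoreList::released{5.008008} || 'unknown' ), "\n";
print '5.10.0 was released on ',
    ( $Module::CoreList::released{5.010000} || 'unknown' ), "\n";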

So, with spring's 5.14.0 release obsoleting 5.10, why was I talking about 5.8? Well, perl 5.10.1 was released in August, 2009, and I suspect that almost anybody running 5.10 is running 5.10.1 -- it has too many critical bugfixes for me to stomach the idea that there's a lot of 5.10.0 out there. (Let's not burst my bubble, okay? Optimism is all I have.) People who upgraded their perl in 2009 can probably manage to do it again in the next year or two without serious pain. They remember how.

On the other hand, experience on IRC, mailing lists, and the rest of the world has told me that the most common subversions of 5.8 in use are 8, 5, and 4, probably in that order. My gut tells me that 5.8.1 comes next, but I'm a lot less confident. Those releases were in 2006, 2004, 2004, and 2003, respectively. Assuming that these upgrades were done within a year or so of the language release (which isn't a great assumption, but a tolerable one), there are a lot of places that haven't upgraded their primary programming tool in over five years. Think of all the other technical tasks you last performed five years ago, and you might realize how little you remember about how you did them, or about the little details that cost you your 80% time overrun.

That's why I try to track current versions whenever possible. It's not that older versions are always a serious liability; it's that upgrading only rarely can cost much more than upgrading frequently. Tracking current versions also drives you to build better integration tools, since you integrate often and want each integration to be cheap. It means that when the next maintenance release comes out, you can run an automated test of your systems, produce a new deployment build, and upgrade, all as routine tasks.
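
To make "routine" concrete, here's a minimal sketch of the kind of guard a deployment or smoke-test script can start with; the floor version is just an example, not a recommendation:

#!/usr/bin/env perl
use strict;
use warnings;

# Refuse to run on a perl older than the floor we claim to support.
# 5.012003 (that is, 5.12.3) is an arbitrary example floor.
my $minimum = 5.012003;

die "this perl ($]) is older than the supported floor ($minimum)\n"
    if $] < $minimum;

print "perl $] is new enough; carrying on\n";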

So, my advice is this: if you're building your company's software on a doubly-obsolete version of a tool that's still under development, it's time to begin tooling up to stay up to date.

by rjbs at January 28, 2011 16:11 UTC

Curtis Poe: Show Perl subname in vim statusline

I asked on the vim mailing list how to see the name of the current Perl sub/method in the status line, and Alan Young, the author of PPIx::IndexLines, had a great suggestion which unfortunately relied on PPI. I'm working with very large modules, and PPI ground to a halt for me. As a result, I took his suggestion and worked out the following.

First, make sure that your .vimrc has set laststatus=2 in it. That will ensure that you always get a status line, even if you only have one window (i.e., don't have split windows). Then drop the following into your .vim/ftplugin/perl.vim:

if ! exists("g:did_perl_statusline")
    " current sub on the left; filename and percent-through-file on the right
    setlocal statusline+=%(\ %{StatusLineIndexLine()}%)
    setlocal statusline+=%=
    setlocal statusline+=%f\ 
    setlocal statusline+=%P
    let g:did_perl_statusline = 1
endif

if has( 'perl' )
perl << EOP
    use strict;

    # Walk upward from the line above the cursor looking for the nearest
    # "sub NAME" declaration, then hand the result back to vim via the
    # subName variable.
    sub current_sub {
        my $curwin = $main::curwin;
        my $curbuf = $main::curbuf;

        # Buffer lines are 1-based in vim's Perl interface; slot 0 is unused.
        my @document = map { $curbuf->Get($_) } 0 .. $curbuf->Count;
        my ( $line_number, $column ) = $curwin->Cursor;

        my $sub_name = '(not in sub)';
        for my $i ( reverse( 1 .. $line_number - 1 ) ) {
            my $line = $document[$i];
            if ( $line =~ /^\s*sub\s+(\w+)\b/ ) {
                $sub_name = $1;
                last;
            }
        }
        VIM::DoCommand "let subName='$line_number: $sub_name'";
    }
EOP

function! StatusLineIndexLine()
  perl current_sub()
  return subName
endfunction
endif

All this does is naïvely read backwards from the current line to get "sub $name" and report $name. It will fail on many common cases. However, it's fast. Very fast. Unlike the PPI solution, I can use this and manually correct any files which don't fit this convention.

It's a quick and nasty hack, but already I'm finding it very useful. Suggestions welcome :)

Note that this requires vim built with Perl support. Just ":echo has('perl')" and if it displays '1', you're good to go. Then type ":help perl-using" to see what's going on.

Update: I've updated that statusline. There's a space after the '\' and it now shows the column, filename and percent. See ":help statusline" or this blog post for more ideas.

Update2: Changed "set" to "setlocal" so we don't screw with non-Perl buffers.

Update3: If I do an ":e $anotherfile", I lose the new status line. Eliminating the exists("g:did_perl_statusline") guard seems to fix this.

by Ovid at January 28, 2011 12:37 UTC

David Golden: OS-specific prerequisites with Dist::Zilla

One of my long-standing annoyances with Dist::Zilla was that it had no way to declare OS-specific prerequisites. I had put off converting some distributions to Dist::Zilla because of that, but I finally got off my duff and wrote Dist::Zilla::Plugin::OSPrereqs to do what I want.

It works like this in your dist.ini:

[OSPrereqs / MSWin32]
Win32API::File = 0.10

That puts a conditional clause in the generated Makefile.PL that adds the prerequisite only on the given operating system. It's a bit of a crude hack, but it appears to work.
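
For a sense of what that conditional amounts to, here's a rough, hand-written equivalent of the idea in a plain Makefile.PL (a sketch only, not the plugin's literal output; My::Dist is a made-up name):

use strict;
use warnings;
use ExtUtils::MakeMaker;

my %args = (
    NAME      => 'My::Dist',     # hypothetical distribution name
    VERSION   => '0.01',
    PREREQ_PM => {},             # prerequisites needed on every platform
);

# Only add the Windows-specific prerequisite when building on Windows.
$args{PREREQ_PM}{'Win32API::File'} = '0.10' if $^O eq 'MSWin32';

WriteMakefile(%args);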

Now even more things I maintain can be streamlined the Dist::Zilla way! Awesome!

by dagolden at January 28, 2011 11:48 UTC

(Last updated: March 21, 2011 02:05 GMT)