Why is my OS X Yosemite install taking so long?: an analysis

Why?
Since the latest Mac OS X update, 10.10 "Yosemite", was released last Thursday, complaints have been springing up online about the progress bar woefully underestimating the actual time to complete installation. More specifically, it appeared as if, for a certain group of people (myself included), the installer would stall out at "two minutes remaining" or "less than a minute remaining", sometimes for hours.

In the vast majority of these cases, though, the installation process hadn't hung; it was just performing a bunch of tasks that it had no way of anticipating when it computed the estimate.

During the install, pressing "Command" + "L" would bring up the install logs. In my case, the logs indicated that the installer was busy right up until the very last minute.

Not knowing very much about OS X's installation process, wanting to learn more, and wanting to know why the installation was taking longer than the progress bar predicted, I saved the log to a file on my disk, intending to analyze it before the installer automatically restarted my computer.

Cleaning
The log file from the Yosemite installer wasn't in a format that R (or any program) could handle natively, so before we can use it, we have to clean/munge it. To do this, we'll write a program in the queen of all text-processing languages: perl.

This script will read the log file, line-by-line from standard input (for easy shell piping), and spit out nicely formatted tab-delimited lines.

#!/usr/bin/perl

use strict;
use warnings;

# read from stdin
while(<>){
    chomp;
    my $line = $_;
    my ($not_message, $message) = split ': ', $line, 2;

    # skip lines without a message portion or with blank messages
    next if !defined($message) || $message =~ m/^\s*$/;

    my ($month, $day, $time, $machine, $service) = split " ", $not_message;

    print join("\t", $month, $day, $time, $machine, $service, $message) . "\n";
}

We can output the cleaned log file with these shell commands:

echo "Month\tDay\tTime\tMachine\tService\tMessage" > cleaned.log
grep '^Oct' ./YosemiteInstall.log | grep -v ']:  ' | grep -v ': }' |  ./clean-log.pl >> cleaned.log

This cleaned log contains 6 fields: 'Month', 'Day', 'Time', 'Machine (host)', 'Service', and 'Message'. The installation didn't span days (it didn't even span an hour), so technically I didn't need the 'Month' and 'Day' fields, but I left them in for completeness' sake.

Analysis

Let's set some options and load the libraries we are going to use:

# options
options(echo=TRUE)
options(stringsAsFactors=FALSE)

# libraries
library(dplyr)
library(ggplot2)
library(lubridate)
library(reshape2)

Now we read the log file that I cleaned and add a few columns with correctly parsed timestamps using lubridate's "parse_date_time()" function:

yos.log <- read.delim("./cleaned.log", sep="\t") %>%
  mutate(nice.date=paste(Month, Day, "2014", Time)) %>%
  mutate(lub.time=parse_date_time(nice.date, 
                                  "%b %d! %Y! %H!:%M!:%S!", 
                                  tz="EST"))

And remove the rows whose dates didn't parse correctly:

yos.log <- yos.log[!is.na(yos.log$lub.time),]

head(yos.log)


##   Month Day     Time   Machine        Service
## 1   Oct  18 11:28:23 localhost opendirectoryd
## 2   Oct  18 11:28:23 localhost opendirectoryd
## 3   Oct  18 11:28:23 localhost opendirectoryd
## 4   Oct  18 11:28:23 localhost opendirectoryd
## 5   Oct  18 11:28:23 localhost opendirectoryd
## 6   Oct  18 11:28:23 localhost opendirectoryd
##                                                                    Message
## 1                   opendirectoryd (build 382.0) launched - installer mode
## 2                                  Logging level limit changed to 'notice'
## 3                                               Initialize trigger support
## 4 created endpoint for mach service 'com.apple.private.opendirectoryd.rpc'
## 5                                set default handler for RPC 'reset_cache'
## 6                           set default handler for RPC 'reset_statistics'
##              nice.date            lub.time
## 1 Oct 18 2014 11:28:23 2014-10-18 11:28:23
## 2 Oct 18 2014 11:28:23 2014-10-18 11:28:23
## 3 Oct 18 2014 11:28:23 2014-10-18 11:28:23
## 4 Oct 18 2014 11:28:23 2014-10-18 11:28:23
## 5 Oct 18 2014 11:28:23 2014-10-18 11:28:23
## 6 Oct 18 2014 11:28:23 2014-10-18 11:28:23

The first question I had was how long the installation process took:

install.time <- yos.log[nrow(yos.log), "lub.time"] - yos.log[1, "lub.time"]
(as.duration(install.time))
## [1] "1848s (~30.8 minutes)"

Ok, about a half-hour.

Let's make a column for cumulative time by subtracting the start time from each row's time:

yos.log$cumulative <- yos.log$lub.time - min(yos.log$lub.time, na.rm=TRUE)

In order to see what processes were taking the longest, we have to make a column for elapsed time. To do this, we can subtract each row's time from the time of the subsequent row.

yos.log$elapsed <- lead(yos.log$lub.time) - yos.log$lub.time

# remove the last row (lead() gives it an NA elapsed time)
yos.log <- yos.log[-nrow(yos.log),]

Which services were responsible for the most writes to the log, and which services took the longest? We can find out with the following elegant dplyr construct. While we're at it, we should add columns for percentage of the whole for easy plotting.

counts <- yos.log %>%
  group_by(Service) %>%
  summarise(n=n(), totalTime=sum(elapsed)) %>%
  arrange(desc(n)) %>%
  top_n(8, n) %>%
  mutate(percent.n = n/sum(n)) %>%
  mutate(percent.totalTime = as.numeric(totalTime)/sum(as.numeric(totalTime)))
(counts)

## Source: local data frame [8 x 5]
## 
##           Service     n totalTime percent.n percent.totalTime
## 1     OSInstaller 42400 1586 secs 0.9197197          0.867615
## 2  opendirectoryd  3263   43 secs 0.0707794          0.023523
## 3         Unknown   236  157 secs 0.0051192          0.085886
## 4  _mdnsresponder    52   17 secs 0.0011280          0.009300
## 5              OS    49    1 secs 0.0010629          0.000547
## 6 diskmanagementd    47    7 secs 0.0010195          0.003829
## 7     storagekitd    29    2 secs 0.0006291          0.001094
## 8         configd    25   15 secs 0.0005423          0.008206

Ok, the "OSInstaller" is responsible for the vast majority of the writes to the log and to the total time of the installation. "opendirectoryd" was the next most verbose process, but its processes were relatively quick compared to the "Unknown" process' as evidenced by "Unknown" taking almost 4 times longer, in aggregate, in spite of having only 7% of "opendirectoryd"'s log entries.

We can more intuitively view the number-of-entries/time-taken mismatch thusly:

melted <- melt(as.data.frame(counts[,c("Service",
                                       "percent.n",
                                       "percent.totalTime")]))

ggplot(melted, aes(x=Service, y=as.numeric(value), fill=factor(variable))) +
  geom_bar(width=.8, stat="identity", position="dodge") +
  ggtitle("Breakdown of services during installation by writes to log") +
  ylab("percent") + xlab("service") +
  scale_fill_discrete(name="Percent of",
                      breaks=c("percent.n", "percent.totalTime"),
                      labels=c("writes to logfile", "time elapsed"))

[Figure: breakdown of services during installation, percent of log writes vs. percent of time elapsed]

As you can see, the "Unknown" process took a disproportionately long time for its relatively few log entries; the opposite behavior is observed with "opendirectoryd". The other processes contribute very little to both the number of log entries and the total time in the installation process.

What were the 5 most lengthy processes?

yos.log %>%
  arrange(desc(elapsed)) %>%
  select(Service, Message, elapsed) %>%
  head(n=5)


##       Service
## 1 OSInstaller
## 2 OSInstaller
## 3     Unknown
## 4 OSInstaller
## 5 OSInstaller
##                                                                                                                                            Message
## 1 PackageKit: Extracting file:///System/Installation/Packages/Essentials.pkg (destination=/Volumes/Macintosh HD/.OSInstallSandboxPath/Root, uid=0)
## 2                                    System Reaper: Archiving previous system logs to /Volumes/Macintosh HD/private/var/db/PreviousSystemLogs.cpgz
## 3                       kext file:///Volumes/Macintosh%20HD/System/Library/Extensions/JMicronATA.kext/ is in hash exception list, allowing to load
## 4                                                                   Folder Manager is being asked to create a folder (down) while running as uid 0
## 5                                                                                                                      Checking catalog hierarchy.
##    elapsed
## 1 169 secs
## 2 149 secs
## 3  70 secs
## 4  46 secs
## 5  44 secs

The top processes were:

  • Unpacking and moving the contents of "Essentials.pkg" into what is to become the new system directory structure. This ostensibly contains items like all the updated applications (Safari, Mail, etc.). (almost three minutes)
  • Archiving the old system logs (two and a half minutes)
  • Loading the kernel module that allows the onboard serial ATA controller to work (a little over a minute)

Let's view a density plot of the number of writes to the log file during installation.

ggplot(yos.log, aes(x=lub.time)) +
  geom_density(adjust=3, fill="#0072B2") +
  ggtitle("Density plot of number of writes to log file during installation") +
  xlab("time") + ylab("")

[Figure: density plot of the number of writes to the log file during installation]

This graph is very illuminating; the vast majority of log file writes were the result of very quick processes that took place in the last 15 minutes of the install, which is when the progress bar read that only two minutes were remaining.

In particular, there were a very large number of log file writes between 11:47 and 11:48; what was going on here?

# if the first time is in between the second two, this returns TRUE
is.in <- function(time, start, end){
  if(time > start && time < end)
    return(TRUE)
  return(FALSE)
}

the.start <- ymd_hms("14-10-18 11:47:00", tz="EST")
the.end <- ymd_hms("14-10-18 11:48:00", tz="EST")

# logical vector: TRUE for writes that fall within the time interval
is.in.interval <- sapply(yos.log$lub.time, is.in,
                         the.start,
                         the.end)
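# (aside) a vectorized equivalent using lubridate's interval type, assuming
# your lubridate version provides interval() and %within%:
# is.in.interval <- yos.log$lub.time %within% interval(the.start, the.end)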

# extract only these rows
in.interval <- yos.log[is.in.interval, ]

# what do they look like?
silence <- in.interval %>%
  select(Message) %>%
  sample_n(7) %>%
  apply(1, function (x){cat("\n");cat(x);cat("\n")})

## 
## (NodeOp) Move /Volumes/Macintosh HD/Recovered Items/usr/local/texlive/2013/tlpkg/tlpobj/featpost.tlpobj -> /Volumes/Macintosh HD/usr/local/texlive/2013/tlpkg/tlpobj Final name: featpost.tlpobj (Flags used: kFSFileOperationDefaultOptions,kFSFileOperationSkipSourcePermissionErrors,kFSFileOperationCopyExactPermissions,kFSFileOperationSkipPreflight,k_FSFileOperationSuppressConversionCopy)
## 
## (NodeOp) Move /Volumes/Macintosh HD/Recovered Items/usr/local/texlive/2013/texmf-dist/tex/generic/pst-eucl/pst-eucl.tex -> /Volumes/Macintosh HD/usr/local/texlive/2013/texmf-dist/tex/generic/pst-eucl Final name: pst-eucl.tex (Flags used: kFSFileOperationDefaultOptions,kFSFileOperationSkipSourcePermissionErrors,kFSFileOperationCopyExactPermissions,kFSFileOperationSkipPreflight,k_FSFileOperationSuppressConversionCopy)
## 
## (NodeOp) Move /Volumes/Macintosh HD/Recovered Items/Library/Python/2.7/site-packages/pandas-0.12.0_943_gaef5061-py2.7-macosx-10.9-intel.egg/pandas/tests/test_groupby.py -> /Volumes/Macintosh HD/Library/Python/2.7/site-packages/pandas-0.12.0_943_gaef5061-py2.7-macosx-10.9-intel.egg/pandas/tests Final name: test_groupby.py (Flags used: kFSFileOperationDefaultOptions,kFSFileOperationSkipSourcePermissionErrors,kFSFileOperationCopyExactPermissions,kFSFileOperationSkipPreflight,k_FSFileOperationSuppressConversionCopy)
## 
## (NodeOp) Move /Volumes/Macintosh HD/Recovered Items/usr/local/texlive/2013/texmf-dist/tex/latex/ucthesis/uct10.clo -> /Volumes/Macintosh HD/usr/local/texlive/2013/texmf-dist/tex/latex/ucthesis Final name: uct10.clo (Flags used: kFSFileOperationDefaultOptions,kFSFileOperationSkipSourcePermissionErrors,kFSFileOperationCopyExactPermissions,kFSFileOperationSkipPreflight,k_FSFileOperationSuppressConversionCopy)
## 
## (NodeOp) Move /Volumes/Macintosh HD/Recovered Items/usr/local/texlive/2013/texmf-dist/doc/latex/przechlewski-book/wkmgr1.tex -> /Volumes/Macintosh HD/usr/local/texlive/2013/texmf-dist/doc/latex/przechlewski-book Final name: wkmgr1.tex (Flags used: kFSFileOperationDefaultOptions,kFSFileOperationSkipSourcePermissionErrors,kFSFileOperationCopyExactPermissions,kFSFileOperationSkipPreflight,k_FSFileOperationSuppressConversionCopy)
## 
## WARNING : ensureParentPathExists: Created  `/Volumes/Macintosh HD/usr/local/texlive/2013/texmf-dist/doc/latex/moderntimeline' w/ {
## 
## (NodeOp) Move /Volumes/Macintosh HD/Recovered Items/usr/local/texlive/2013/texmf-dist/fonts/type1/wadalab/mrj/mrjkx.pfb -> /Volumes/Macintosh HD/usr/local/texlive/2013/texmf-dist/fonts/type1/wadalab/mrj Final name: mrjkx.pfb (Flags used: kFSFileOperationDefaultOptions,kFSFileOperationSkipSourcePermissionErrors,kFSFileOperationCopyExactPermissions,kFSFileOperationSkipPreflight,k_FSFileOperationSuppressConversionCopy)

Ah, so these writes are the result of the installer having to move files back into the new installation directory structure. In particular, the vast majority of these move operations involve files belonging to a program called "texlive". I'll explain why this is to blame for the inaccurate projected time to completion in the next section.
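We can put a rough number on that claim by checking what proportion of the messages in this one-minute window mention texlive (the exact figure will, of course, be specific to my system):

# fraction of messages in the 11:47-11:48 interval that mention texlive
mean(grepl("texlive", in.interval$Message))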

But lastly, let's view a faceted density plot of the number of log file writes by process. This might give us a sense of what steps go on as the installation progresses by showing us which processes are most active.

# reduce the number of services to a select few of the most active
smaller <- yos.log %>%
  filter(Service %in% c("OSInstaller", "opendirectoryd",
                        "Unknown", "OS"))

ggplot(smaller, aes(x=lub.time, color=Service)) +
  geom_density(aes( y = ..scaled..)) +
  ggtitle("Faceted density of log file writes by process (scaled)") +
  xlab("time") + ylab("")

[Figure: density of log file writes by process (scaled)]

This shows that no one process runs consistently throughout the entire installation; rather, the processes run in spurts.

The answer
The vast majority of Mac users don't place strange files in certain special system-critical locations like '/usr/local/' and '/Library/'. For those who do, though, these directories are littered with hundreds and hundreds of custom files that the installer doesn't, and can't, have prior knowledge of.

In my case, and probably many others, the estimated time-to-completion was inaccurate because the installer couldn't anticipate needing to copy back so many files to certain special directories after unpacking the contents of the new OS. Additionally, for each of these copied files, the installer had to make sure the subdirectories had the exact same meta-data (permissions, owner, reference count, creation date, etc…) as before the installation began. This entire process added many minutes to the procedure at a point when the installer thought it was pretty much done.
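Incidentally, R offers a quick look at the kind of per-file metadata the installer has to preserve; file.info() reports a path's permissions, owner, group, and timestamps (the path below is just an example):

# permissions, owner, group, and modification time for one directory
file.info("/usr/local")[, c("mode", "uid", "gid", "mtime")]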

What were some of the files that the installer needed to copy back? The answer will be different for each system but, as mentioned above, anything placed in the '/usr/local' and '/Library' directories that wasn't Apple-supplied needed to be moved aside and moved back.

/usr/local/
/usr/local/ is used chiefly for user-installed software that isn't part of the OS distribution. In my case, my /usr/local contained a custom compiled Vim; ClamXAV, a lightweight virus scanner that I use only for the benefit of my Windows-using friends; and texlive, software for the TeX typesetting system. texlive was, by far, the biggest time-sink since it had over 123,491 files.
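If you're curious how many files an installer would have to reconcile on your own system, here is a quick (if slow) way to count everything under the prefix from R:

# count every file under /usr/local (recursive; may take a while)
length(list.files("/usr/local", recursive=TRUE))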

In addition to these programs, many users might find that the Homebrew package manager is to blame for their long installation process, since this software also uses the /usr/local prefix (although it probably should not).

/Library/
Among other things, this directory holds (subdirectories that hold) modules and packages that the Apple-supplied Python, Ruby, and Perl use. If you use these Apple-supplied versions of these languages and you install your own packages/modules using super-user privileges, the new packages will go into this directory and will appear foreign to the Yosemite installer.

To get around this issue, either install packages/modules into a local (non-system) library (see the sketch below), or use alternate versions of these programming languages that you either download and install yourself or install with MacPorts.
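For the Apple-supplied Python and Ruby, for example, per-user installs are one way to keep new modules out of the system /Library (the package names are just hypothetical examples; the flags direct installs to your home directory instead):

pip install --user pandas          # Python: per-user site-packages
gem install --user-install nokogiri  # Ruby: per-user gem home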

---

You can find all the code and logs that I used for this analysis in this git repository.

This post is also available as an RMarkdown report here.


Compiling R from source and why you shouldn't do it

I’ve always thought that it’s silly, in most cases, to compile software from source when it’s already available in binary form. To the end of making more binary packages available to Mac users, I just started contributing to a project that is creating a repository of 64 bit builds of the over 12,000 packages in pkgsrc (NetBSD's portable package manager). This means having to get my hands dirty compiling packages myself. After contributing Vim, the next logical thing for me was to provide an R build.

Compiling R from source (again and again) has been tremendously enlightening for me. Not only do I feel like I understand a lot more about R’s internals, but I’ve also come to the conclusion that if CRAN provides a binary build for your system, you should never really compile R yourself. This, most definitely, includes Mac users.

Before I go into how to build it, let’s explore some of the reasons someone might want to build R themselves and why, in most cases, this is unnecessary.

  • I want a faster R.
  • It’s sometimes assumed that if you build something from source yourself, it’s customized to your particular system and, therefore, runs faster. In practice this requires a lot of intervention (and heartache) at the configuration step of the compilation process. In the case of R on OS X, no amount of compiler optimization and configuration (using the stock linear algebra libraries) I’ve attempted was able to outperform R from CRAN. You don’t know R better than the R Core Team, and they know what’s good for you. Just use theirs.

  • I can compile against other linear algebra libraries and get a speedup that way.
  • You don’t need to compile R against these other libraries in order to use them. I’ll go into how you can use them from your current R installation in another post.

  • I’m on a system for which there is no binary available.
  • Yikes! You’re probably used to heartache. You have no choice but to build R yourself. Have a ball!

  • I just want to.
  • As I’ve discovered, it is a great way to learn more about R’s internals. If you fancy yourself an R ‘guru’ and want to build R yourself, I can’t really blame you—so long as you don’t use your likely botched build in a production environment.

  • I’m a Gentoo user.
  • I’m so sorry.

  • I’m a Windows user and a masochist.
  • Compiling R is an excellent choice. The safe word is “GNU”.

  • I’m helping to build a repo of 64 bit binaries for pkgsrc or I’m writing a blog post about compiling R.
  • You’re exempt from criticism or ridicule.

If at this point, you’re still interested in compiling R, in spite of my attesting to it being, for most cases, completely unnecessary, please read on. I also strongly recommend that you read the following guide from CRAN.

Dependencies
Users of most GNU/Linux systems can build the dependencies necessary by running:

sudo apt-get build-dep r-base-dev

or the equivalent command for your system.

On OS X, you need:

  • Xcode and Xcode command-line tools: Xcode is available from the App Store. The command-line tools have to be downloaded separately from the ‘Preferences’ menu.
  • gfortran: or another compliant Fortran compiler. You need this chiefly to compile the linear algebra libraries.
  • Java: You can grab the Java for OS X developers package from the Apple Developers page or grab another JDK. You need this for the JNI headers.
  • XQuartz: This includes the X11 headers and cairo.
  • MacTeX: This isn’t strictly necessary, but you will need it to generate R’s PDF documentation. If you don’t want to download this package of over 2 GB, there are other recourses available. If you do want this package, you have to add "/usr/texbin" to your PATH environment variable (see below). Yay, now you have LaTeX!
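A one-liner for that last step, assuming a bash-style shell (add it to ~/.bash_profile to make it stick):

export PATH="/usr/texbin:$PATH"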

Other dependencies are unnecessary because the R source ships with fallback versions of them. These include pcre, zlib, xdr, and a few others. Still other dependencies will be present on any POSIX-compliant system.

Configuration and build
After downloading the source here, you have a few decisions to make. The first is where you want to install R. You don’t have to install R anywhere per se, since it can be run straight from the build directory; you can just place the R script (which contains the prefix hardcoded) from the bin subdirectory anywhere on your PATH. If you do not specify the prefix, it will default to the build directory.

It’s customary to set your prefix for user compiled software to /usr/local, so that’s what we’ll do here.

The other decisions that have to be made are very platform/system specific. You can see all the configuration options by running

./configure --help

The auto-configuration is very good at setting sane defaults for most of these options. For example, if you’re building on OS X, it will, by default, build R as a framework and shared library, which you would need if you want to use R.app (a separate install).

On OS X, I ran my pre-configuration and configuration thusly:

export CC="clang"
export CXX="clang"
export F77="gfortran-4.2 -arch x86_64"
export FC=$F77
export OBJC="clang"
./configure --prefix=/usr/local

Assuming everything goes well, you can now start building with

make
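
Optionally, you can run R's regression test suite at this point to catch a botched build early:

make check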

If it successfully builds, you can install R to the prefix with

make install

Now you have R.

If you're on a Mac, you may have noticed that you now have a crippled R install. This is for a few reasons.

  • The binary from CRAN comes with R.app. If you want that, you have to build that yourself.
  • You can no longer download binary builds of your favorite R packages; R has to build them from source now.

As an R user on a Mac, you then realize how good you’ve had it. The binary build from CRAN comes with R.app, a fast R framework, and it installs binary R packages by default. Now you no longer have those options.

Additionally, dear Mac-user, you also have the benefit of using RStudio’s new Cocoa interface. Count your lucky stars, install CRAN’s binary build, and read my next post about how to switch out the linear algebra libraries that R uses for a few other faster alternatives.


qstats - quick and dirty statistics tool for the Unix pipeline

Back when 200MB hard drives were the size of washing machines and programs had no choice but to be as efficient as possible, Unix was born. In a serendipitous twist of fate, the same programs that were borne of this era of 4MB RAM and 16 bit processors are useful to data analysts with 2,000 times the amount of RAM and 64 bit multicore processors, processing data files several GBs large.

Like all good things, Unix was started at Bell Labs in the late 60s. It has been honed over 40 years and now runs, if not on your computer, on the vast majority of the web servers you visit, a lot of phones, embedded devices you use, and a toaster near you.

[Figure: Unices]

Since nearly everything in Unix is a text file, it grew up to be… very good at processing text. This is why Unix tools are a great addition to the data analyst's toolbox. There are a few great posts on how to get started using these tools in your workflow (here, here, here, and here) which you should read. By the way, when I talk about tools here, I’m talking about pipeline-able tools that take raw text input from standard input, like sed, awk, and grep, not perl, tcl, or python.

There are tools to select columns, filter text for regular expressions, join files on a key, and reshape arrays, but I felt like there was one that was missing. After chaining tool after tool together and finally cajoling the data into a format and subset that I want to process and explore, I'd have to redirect the stream to a text file and read it from R. Clearly, if I’m to perform some complicated machine learning algorithm with this data, this is the best way to go. But if I just want to take a peek at the spread of the data, or quickly compare means, this is overkill.

Introducing qstats

Inspired by this gap in the Unix toolchain, I wrote a tool, qstats, that computes simple summary statistics from the command-line. It also includes data-binning and simple bar chart functionality. I designed it, in C, specifically to be as fast as possible, and bare-bones enough to work on any POSIX-compliant system without having to deal with outside dependencies. Let’s see it in action…

Functionality
By default, qstats will print R-like summary statistics on the given data. This includes the minimum value, the 1st quartile, the median, the mean, the third quartile, the maximum value, the range, and the standard deviation. You can use the -m flag to just get the mean. This will be faster because the data does not have to be sorted.

In addition to these statistics, qstats can also produce a frequency tabulation with an arbitrary number of "bins". Calling qstats with the -f10 flag will create 10 equal intervals and -f20 will create 20. Just calling it with -f will use Sturges' rule (⌈log2(n)⌉ + 1 bins for n observations) to come up with a reasonable number of bins in most cases.

Finally, with the -b flag, qstats will output a histogram-like horizontal bar-chart. Much like with the -f flag, you can supply the number of intervals to create. We will see an example of the bar-chart at work in the next section.
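Putting those flags together, typical invocations might look like this (data.dat and data.tsv are placeholder filenames, and the last line assumes qstats reads from standard input like the other pipeline tools discussed above):

qstats data.dat        # full R-like summary statistics
qstats -m data.dat     # just the mean (skips the sort)
qstats -f10 data.dat   # frequency tabulation over 10 equal intervals
qstats -b20 data.dat   # horizontal bar chart with 20 bins
cut -f3 data.tsv | qstats -m   # assuming stdin is supported, it chains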

Rudimentary spread visualization

To view the spread with a bar-chart, let's output samplings from two distributions, the normal and the chi-square...

# one million normally distributed with a mean of 100 and a standard deviation of 10
millnorm <- rnorm(1000000, mean=100, sd=10)
write.table(millnorm, "millnorm.dat", col.names=FALSE, row.names=FALSE)

# one million values sampled from the chi-square distribution with two degrees of freedom
millchi <- rchisq(1000000, df=2)
write.table(millchi, "millchi.dat", col.names=FALSE, row.names=FALSE)

[Figure: bar-chart visualization of the normal distribution sample]

[Figure: bar-chart visualization of the chi-square distribution sample (two degrees of freedom)]

Speed comparisons
Let’s create a file of 100,000,000 floating point numbers to test speeds with R…

# sample from normal distribution with a mean of 100 and a standard deviation of 10
one.h.m <- rnorm(100000000, mean=100, sd=10)
write.table(one.h.m, "one_hundred_million.dat", row.names=FALSE, col.names=FALSE)

The resulting file is 1.7 GBs large.

  • R
    The R script that we’ll time will look like this…

    #!/usr/bin/Rscript --vanilla
    frame <- scan("one_hundred_million.dat")
    summary(frame)
    

    and the timing...

    $ time ./rtest.R
    Read 100000000 items
       Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
      44.95   93.26  100.00  100.00  106.70  157.00 
    ./rtest.R  210.66s user 3.57s system 99% cpu 3:35.08 total
    

    3.5 minutes

  • Awk

    $ time awk '{ x+=$1; next } END { print x/NR }' one_hundred_million.dat
    100.001
    awk '{ x+=$1; next } END { print x/NR }' one_hundred_million.dat  128.34s user 0.56s system 99% cpu 2:09.01 total
    

    2 minutes.

    Note that this only computes the mean, not any of the other summary statistics. Some of these require sorting, which takes more time.

  • sort command

    $ time sort -n one_hundred_million.dat > /dev/null
    sort -n one_hundred_million.dat > /dev/null  151.89s user 3.46s system 99% cpu 2:35.72 total
    

    2.5 minutes

  • qstats

    $ time qstats one_hundred_million.dat
    Min.     44.947
    1st Qu.  93.2553
    Median   100.001
    Mean     100.001
    3rd Qu.  106.747
    Max.     156.997
    Range    112.05
    Std Dev. 10.0002
    Length   100000000
    qstats one_hundred_million.dat  53.62s user 1.04s system 99% cpu 54.722 total
    

    a little less than a minute

I show these timings here not to pit this small program against these great, veteran tools. Instead, I just want to underscore the point that smaller, very-few-trick-pony, specialized programs can afford to be faster than their more capable and robust counterparts. When these small tools will do the trick, they can not only be faster and simpler to use, but they also comport more with the Rule of Parsimony from the Unix philosophy.

Final words
The source code for this project is on Github with installation instructions. You can also download and install from this tarball. I've tested it on OS X, Debian, and NetBSD, but it should compile without any issue on any POSIX system with a reasonably recent C compiler. Please let me know if there are any installation issues for your system.

Please fork me and feel free to send a pull request or add an issue to the repo. I hope you like it!


The state of package management on Mac OS X

It's that time again; I suspect that Mavericks will be released in the next few weeks, so I get the once-every-year-(or-so) chance to experiment with and modify the hell out of my OS X installation, because I'll just do a fresh install soon anyway. This time around I'm experimenting with package managers.

I've actually tried really hard to avoid ever having to use them. I started using Slackware in high school, and after some brief experimentation (in college) with Ubuntu, I took up OS X as my main OS. But, since building from source code is somewhat of a nightmare on a Mac--at least compared to what I was used to--I started to look into package management solutions.

The terrain was difficult to navigate. It seemed like people had some really strong opinions on which one was the best and which ones were on their way out. Since I didn't know who to believe, I just stuck to manual building. But, since I'm going to get a tabula rasa in a few weeks, I thought I'd take this opportunity to document this terrain exploration and present my findings in the most impartial manner that I'm capable of.

Before I start, I want to make a few things clear. (1) There is some disagreement on what actually constitutes a package manager. Here, I'm referring broadly to any centralized software installation framework that tracks or resolves dependencies, whether it builds from source or not. (2) I haven't had the time to become an expert on all of the managers I audited, so keep that in mind. (3) Not only are all of these package managers open source, but many of them have robust configuration options, so I'll be talking mostly about default behavior from the perspective of a new user.

If my old editions of O'Reilly books discussing Mac software are any indication, MacPorts and Fink were the two best options available. Then Homebrew came on the scene and a lot of people seem to be raving about it. I started off with the intention of only trying out these three but in the course of my research, I learned about two others that I wanted to give a chance.

To see a table summary of my findings, you can just scroll down to the end of this post.

Rudix
Rudix is a binary-only package manager that attempts a "hassle-free" way of getting Unix programs on a Mac. It doesn't have many packages available yet, but it has no trouble at all installing and uninstalling the ones that it does offer. For example, their 'Go' installation was the most painless installation of a language that I've ever experienced. My complaints are that (a) the binaries go directly to /usr/bin, so they are not sandboxed, and (b) the man files for these tools were not installed with the binaries.

MacPorts
MacPorts was one of the most recommended package management solutions that I came across in my research. It also probably attracted the most flak. It was built with the likeness of FreeBSD's Ports system, so it's a source building manager. What I liked about MacPorts was the fact that the installation was painless (it updated my PATH for me!), the compiled binaries were sandboxed in /opt/local, and the wealth of packages available was hard not to love.

An interesting thing about MacPorts is that it eschews Apple-supplied libraries and links sources against its own. A benefit of this is that it can ensure a consistent experience across OS X versions and whatever whimsical decisions Apple may choose to make in the future. The drawback to this approach is that building what appears, prima facie, to be a small package may require an extraordinarily large number of huge programs and libraries to be built as dependencies.

Fink
Fink is modeled after Debian's dpkg and apt-get. Having used Debian-based distros in the past, I was excited to see what Fink had to offer. Like apt-get, Fink can install binaries or build from source. What wasn't like apt-get was that a completely different command ("fink") was used to build from source than to install the binaries. This was somewhat confusing. Furthermore, there is no binary installer for 10.6 through 10.8, so installation was a bit harrowing. Once it was installed, though, and I got used to the separate commands and their differences from "apt-get", I was pleased that my PATH was automatically updated and that the installed binaries were appropriately sandboxed.

Homebrew
Like I mentioned above, a lot of people are really excited about Homebrew. It is being developed with the intention of correcting (what it perceives to be) MacPorts' shortcomings. From what I can tell, it tries really hard to work with OS X's existing frameworks/libraries. For this reason, Homebrew is probably a good choice for someone who is using it to install the occasional tool on a single-user system.

A neat thing about Homebrew is that it is written very simply in ruby. Its "recipes" to install packages are easy-to-read ruby scripts. They are also very easy to modify and the community encourages upstream development.

Something not-so-neat about Homebrew is that it is publicly antagonistic towards MacPorts. This is probably something that only I care about, though.

pkgsrc/pkgin
Again, I started with the intention of only auditing Fink, Homebrew and MacPorts. When I learned about pkgsrc, I thought that it was too obscure to be a serious contender and I was considering not looking into it further. I am so glad that, for completeness' sake, I decided to try it out because I virtually have only good things to say about it.

pkgsrc started as NetBSD's package management solution. Given NetBSD's dedication to portability, it is perhaps not a surprise that their package manager would attempt to follow suit. It has now been adapted for use on over a dozen different operating systems. Among these are AIX, Solaris, HP-UX, GNU/Linux, Windows (via Cygwin and Interix) and, of course, OS X. It is the default manager on DragonflyBSD and was even the default manager on a now-discontinued GNU/Linux distro, Bluewall Linux. It is similar to (and, indeed, was forked from) FreeBSD's ports system.

I don't think many Mac power-users know that this is an option for them, which is a shame because it turned out to be my favorite. After following some fairly simple steps, a mature and sophisticated package manager with over 8,000 packages is at your disposal.

Probably the best thing about pkgsrc from the perspective of Mac users is a tool called pkgin. It's an apt-like tool for installing binaries from pkgsrc; installing strange Unix tools on OS X *could not* be easier.
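For a sense of the workflow, the basic pkgin verbs mirror apt's (the package name is just an example):

pkgin update        # refresh the list of available binary packages
pkgin search tmux   # look for a package
pkgin install tmux  # install it, resolving dependencies
pkgin remove tmux   # uninstall it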

The only caveat I should mention is that I haven't tested installing Python with it because I'm still too far away from Mavericks to risk botching my environment that badly. I suspect that it would cause issues because pkgsrc, being a NetBSD project, can't be as aware of OS X framework idiosyncrasies as a Mac-specific package manager can.

I'd like to write more on this topic, but this post is getting unwieldy. I plan to talk more about pkgsrc and OS X in another post but, for this one, I'll conclude with the "too-long-didn't-read" version of my journey through package-manager-land.

| Category | Rudix | MacPorts | Fink | Homebrew | pkgsrc / pkgin |
| --- | --- | --- | --- | --- | --- |
| Homepage | rudix.org | MacPorts.org | fink.thetis.ig42.org | brew.sh | pkgsrc.org and pkgin.net |
| Twitter | @rudix4mac (updates often) | @macports (last tweet in July) | @finkmac (no update since 2010) | @machomebrew (very active) | @pkgsrc (last tweet in September) |
| Year project started | 2005 | 2002 | 2001 | 2009 | support for Darwin added in 2001 |
| Number of packages | 488 (but `rudix available \| wc -l` says 351) | 17,680 (but `port list \| wc -l` says 17,686) | 7,951 (but `apt-cache search . \| wc -l` says 209 stable binary .debs) | 2,498 (`brew search \| wc -l` says 2,591), not counting various extra "taps" | 8,884 binaries for OS X (according to `pkgin available \| wc -l`) |
| Source/binary/both? | binary only | traditionally source only | option for both | source, but also binaries through "bottles" | both; traditional pkgsrc will do both, but using only pkgin will grab the binaries |
| Language written in | Python | Tcl | Perl (front-end) | Ruby | C |
| License | BSD | BSD | GPL :( | BSD | BSD |
| GUI options | not really, but there's an online package-browsing option | currently three | two: FinkCommander and Phynchronicity | nope, but online package browser at Braumeister.org | online package browser at pkgsrc.se, but none others that I can find |
| Default prefix | directly to /usr/local | /opt/local | /sw | /usr/local/Cellar, with programs symlinked to /usr/local/bin | /usr/pkg |
| PowerPC support | not anymore | yes, because it is built from source | yes | not traditionally, but there are forks available that might provide this functionality | not unless you build from source |
| Latest GCC available | not available | 4.8.1 | 4.8 | 4.9 | no binary available, but pkgsrc has 4.8 |
| Python stuff | not available | Py27 and 33 and a lot of great packages | Py23 and 33 and a lot of great packages | Py27 and 33; I couldn't find any packages, but the pythons install pip and easy_install | Py27 and 33 and a lot of great packages (see warning above) |
| Installation of package manager | very easy and fast | very easy and fast | nightmarish (no binary installer for 10.6 - 10.8) | easy as pie | very easy and fast with these instructions |
| Uninstallation of package manager | easy and painless | hell-ish | very easy and fast | relatively easy if you follow this gist: https://gist.github.com/mxcl/1173223 | not sure; probably just rm -rf-ing the /usr/pkg and /usr/pkgsrc directories |
| Installation of packages | extremely easy | slow, since it builds from source | the source builds are understandably slow, but the binaries are (obviously) quick | source compilation is obviously slow, and I've had some linking issues at times | trivially easy |
| Uninstallation of packages | easy and painless | easy | easy and fast | very easy | trivially easy |
| Community support | not very much is required | great | not so great | very, very good | a few websites have some great documentation, but otherwise it is hard to find OS X-specific info |
| Development | Git; primarily led by one person (5 contributors) | Subversion; very happening, with many, many developers | Git; 14 GitHub contributors, but commits are infrequent | Git; most vibrant, with over 3,000 contributors. "Recipes" for compilation are easily modified and you are encouraged to submit pull requests. This project is very easy to contribute to | pkgsrc is CVS, pkgin is Git; pkgsrc is well backed by the NetBSD Foundation |