
Stop the autoconf insanity! Why we need a new build system.

Any veteran GNU/Linux user has, at one point or another, run across a package which used the autoconf/automake toolset. There is a lot to be said in favor of this emerging standard. Running "./configure && make && make install" usually results in a working installation of whatever package you are attempting to compile. The autoconf tools are also portable to almost every *nix platform in existence, which generally makes it easier to release your program for a large variety of systems. However, despite these few pluses, the auto* tools are constantly a thorn in the side of users and developers alike.

Problems for Users

Let's take a typical autoconf package. I'll call it package-xyz. Joe GNU/Linux User has just downloaded and untarred this package. Like most users, the first thing he does is change directory to the newly-unpacked source tree and run a quick "ls" to see what files are there. To his delight, he discovers a "configure" script, indicating that he probably doesn't have to do any editing of Makefiles or other such craziness. Little does he realize the troubles awaiting him.

He again does the typical thing and runs "./configure --prefix=/opt". The configure script runs for a while, then exits with an error which basically translates to "You have an autoconf version which is three weeks old; please upgrade", but this is displayed in the most cryptic manner possible. He won't realize this is indeed what the error message means until he runs a few quick Google searches. He really wants to install this program, so he doesn't give up quickly. A few minutes later, he's run apt-get upgrade (or whatever automatic update mechanism his distribution uses).

He decides that he would like to customize his package, so he runs "./configure --help". A list of options pukes itself all over his screen. Undeterred, he runs configure again, but this time pipes the output to his favorite pager. He's forced to parse this output because there is no real standard for the options to pass to configure. He wants to use GTK, but how does he do it? Is it "--with-gtk" or "--enable-gtk"? Perhaps this time it's "--enable-gnome", or maybe "--with-toolkit=gtk". In any case, he finally "gets" it and runs the configure script, customizing the package to his heart's delight.
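
In practice, the only recourse is to grep the help text for whatever the feature might be called this time. A typical hunt looks something like this (a sketch; whether the package exposes a GTK option at all varies):

  ./configure --help | less
  ./configure --help | grep -i gtk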

Then he discovers he's missing lib-lzw-3.2.3.4. He takes the time to go through the same rigamarole with that package and, after a bit of tinkering, gets it to install. Then he discovers that configure still thinks that the library is not there. He does a bit of investigation and discovers that he needs to delete config.cache. That's if he's lucky and the "cached" option isn't in some other random directory or file (in which case he would remove the source tree and start again from scratch).
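
The fix, once he figures it out, is usually just this (assuming the cache really does live in config.cache at the top of the source tree and isn't squirreled away somewhere else):

  rm -f config.cache
  ./configure --prefix=/opt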

The configure script now runs properly and outputs a nice Makefile, so Joe runs "make". To his surprise, configure, for no apparent reason, decides to run again. That's OK; Joe sits patiently through the same tests as they run over and over (this time, hopefully, cached). While looking through the log messages, he happens to wonder if there is, perhaps, maybe, some way to not have to run the same 50 tests over and over and over. After a while, make runs, starts compiling the program, and errors out. Joe reruns make to see what the error is, redirecting stderr so he can scroll up and find the first of the 95 error messages presented. To his amazement, while configure picked up the fact that he did indeed have lib-lzw-3.2.3.4 installed, it failed to realize that the header files were located in /usr/include/lzw, not in /usr/include. At this point, Joe has a real problem.
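
(There is a conventional way out of this particular jam, which a couple of the comments below also point out: tell configure about the nonstandard include directory up front. A sketch, assuming the package's configure script honors the usual CPPFLAGS/CC environment variables, which most autoconf-generated scripts do:

  CPPFLAGS="-I/usr/include/lzw" ./configure --prefix=/opt
  # or, equivalently
  env CC="gcc -I/usr/include/lzw" ./configure --prefix=/opt

Of course, Joe would have to know this incantation in the first place, which is rather the point.)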

Our friend Joe is a seasoned *nix user, so his gut reaction is to start Emacs (one thing the FSF got right) and edit the Makefile. He does this only to discover a 50,000-line-long monstrosity. He greps it, looking for the right variable to edit. Of course, there are 500 different $(INCLUDE) settings scattered throughout. Little does he realize that the variable he's looking for is three directories down and called $(YOUD_NEVER_GUESS_THIS_VARIABLE_NAME_HAHA). He finally finds it and edits it to the proper value. configure again decides to run for no apparent reason (even though "make" skipped it the last five times) and overwrites all this hard work.

Now, Joe is presented with an interesting problem. He realizes that he needs to edit something besides the Makefile. But what does he look at? configure.in? Makefile.am? Makefile.in? Makefile.yoyo? Makefile.banana?
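
For the record, the answer is configure.in and Makefile.am; everything downstream of them is generated and will be overwritten. Roughly (the exact behavior shifts between versions of the tools):

  # configure.in  --(autoconf)-->  configure
  # Makefile.am   --(automake)-->  Makefile.in  --(./configure)-->  Makefile
  #
  # so after editing configure.in or Makefile.am, regenerate with something like:
  aclocal && autoconf && automake && ./configure

Not that Joe has any way of knowing this.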

By now, the average user has done one of the following:

  1. Given up and tried a different package.
  2. Shot himself in the head with a twelve gauge shotgun.

Joe is feeling masochistic and continues trying. I'll spare you the pain of his later problems, and of hearing about Sally, the FreeBSD user who wants to configure a package to run under GNU/Linux on her iPaq.

Problems for Developers

Your average developer has no clue what m4 is or how it works. Most Unix people (except the old diehards) have not even heard of m4. That didn't stop the autoconf guys from using it.[1]

Here's the issue: In order to be able to write your autoconf setup files properly (without a considerable amount of pain and suffering), you must first learn m4. Let's follow the path of a GNU/Linux developer, a creative lady named Jane.
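
To give a flavor of what that means: m4 reads text, expands any macros it finds, and writes the result out. A trivial, hedged example (GREET is made up; real autoconf macros are rather longer):

  $ m4 <<'EOF'
  define(`GREET', `Hello, $1!')dnl
  GREET(`world')
  EOF
  Hello, world!

configure.in is exactly this kind of file: shell script interleaved with macro calls, which m4 expands into the enormous configure script.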

Jane is a seasoned developer. She is accustomed to writing Makefiles, but decides she wants the extra portability that autoconf allows. She isn't familiar with m4, but thinks she can wade through building her files anyway.

First, she looks for a tutorial on how to use autoconf. Sadly, she will spend a lot of time looking. If she's lucky, she'll discover a five-year-old online copy of a book that might help her accomplish 5% of what she needs to do.

The GNU info system isn't much help, either. The documentation she needs is spread over 50 info pages, divided into three packages. These divisions don't help much, as configure.in sometimes makes Makefile.am do strange things. Eventually, she gives up writing the files herself and does what every other developer does with the auto* tools: copies someone else's. There are perhaps 30 developers worldwide who understand the autoconf tool chain. (I doubt there is anyone who quite understands the braindead Makefile that is puked out.) This very problem is what caused many of Joe's issues. Well-written configure scripts are rare things of beauty. (They are mostly found in GNU projects, but I've seen a few others.)
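
For what it's worth, here is roughly what Jane is trying to write. This is a sketch only: the file names are real, the project name is invented, and the exact macros and boilerplate required shift with the autoconf/automake versions involved.

  # --- configure.in (turned into "configure" by autoconf) ---
  AC_INIT(hello.c)
  AM_INIT_AUTOMAKE(hello, 0.1)
  AC_PROG_CC
  AC_OUTPUT(Makefile)

  # --- Makefile.am (turned into Makefile.in by automake) ---
  AUTOMAKE_OPTIONS = foreign
  bin_PROGRAMS = hello
  hello_SOURCES = hello.c

  # --- regenerate and build, in this order ---
  aclocal && autoconf && automake --add-missing --copy
  ./configure && make

A handful of lines per file does not look so bad; the trouble starts the moment Jane needs anything the canned macros don't cover.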

The Root of the Problem

The autoconf/configure system is clever, but in the end, it is just a very creative hack to work around the deficiencies of Makefiles. Those who have used *nix for a long while have seen several such workarounds. There's the Sleepycat libdb approach of having 50 Makefiles, one for each architecture. There's the "config.h" approach seen in several packages. There's "imake", used by X11 projects.[2]

It's easy to see why autoconf resorts to outputting 50,000-line-long Makefiles. Makefiles lack support for complex dependencies and complicated error checking. Many of the above tools use the same hacks, like using "touch" to create files used for dependencies, with complicated build rules for making those files. A paper on the various Makefile techniques could fill volumes.
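
A sketch of the classic stamp-file hack (the names here are illustrative; the rules automake actually emits are considerably hairier): make only understands file timestamps, so an empty file is conjured up with "touch" purely to give other rules something to depend on.

  # recipe lines must begin with a tab
  stamp-h: config.h.in config.status
  	./config.status
  	touch stamp-h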

Solutions

There are several unique and innovative projects being developed to help sort out the quagmire in build processes.

Two such solutions are "SCons" and "Cons". Both try to replace Make with something far more flexible. However, they both depend on tools which are a bit less standard than sh and m4: Cons is written in Perl, and SCons in Python. Still, both languages are quickly becoming standard on the major Unixes. SCons is a bit less mature, but is preferred by the author of this paper. Using either allows a much clearer build process for complicated software, and there is no requirement to generate a set of build instructions from a template.

Another interesting piece of software is A-A-P. It allows a user to create "recipes" for building software.

Any of these is cleaner and easier than autoconf. I like the familiarity that Make brings to the table, but it's time to face up to the fact that Makefiles introduce far more complexity than they need to.

  1. To be fair, m4 is a nice tool if you know how to use it.
  2. imake is perhaps the only thing that makes autoconf look sane, but I digress.

Recent comments

21 Jun 2003 00:49 gback

You're dead on
In addition, automake/libtool/autoconf are a versioning nightmare. Output like this is not uncommon:

checking for libtool >= 1.3.4 ... yes (version 1.4.2)
checking for autoconf >= 2.52 ... Too old (found version 2.13)!
checking for automake >= 1.6 ...
You must have automake 1.6 or 1.7 installed to compile The GIMP.
Get ftp.gnu.org/pub/gnu/au...
(or a newer version if it is available)
checking for glib-gettextize >= 2.0.0 ... yes (version 2.0.1)
checking for intltool >= 0.17 ... yes (version 0.17)

Please install/upgrade the missing tools and call me again.

Of course, if you upgrade you break something else.

21 Jun 2003 01:07 msameer

donno what i'm sayen.
If the package requires an autoconf/make/foo/bar newer than what you have, then it's not an auto* problem; it's the developer's.
That's one.
The other thing is that if you don't have enough experience, don't compile source packages. Stick to your distro's packages.
Yes, I know it's not easy ("it's a pain, actually") to modify the Makefiles, but
make CC="gcc -I/usr/include/lzw" is easy and solves some problems.

Most of the problems with autotools are really the developers' fault. Why use the latest autoconf?
But the final user doesn't have to install autoconf. What are you guys talking about? The configure script is a *simple* bash script.
If you (the user) modify the Makefile.am, then you've bought trouble. Poor guy, you'll need the bleeding-edge automake!

21 Jun 2003 01:08 ParkerPine

That just had to be said
I can vigorously agree with most of what was said in this article.

autoconf is not so much a problem for the end-user (as long as it works, which happens to be the case most of the time). But indeed, I once tried to use it as a developer and, frankly, I gave up after a while. The documentation I found on the internet was ludicrous and - just as described in the text - outdated. I found several scattered documents that contradicted each other severely: each had a different opinion on the sequence in which I should run the various auto* tools. (Needless to say, none of the suggested orders worked.)

I do prefer projects that use it, though. The usual "./configure --help; ./configure --args=...; make; make install" sequence is familiar and convenient. But now I understand why so many projects (X, exim, and partly Perl, to name just a few) are gradually leaving this well-beaten track and using their own (or some other) installation frameworks.

21 Jun 2003 01:14 reduz

Same thoughts.
Because of this, I have decided to switch my build system to SCons (www.scons.org), which is an awesome Python-based build system/library. It's ages easier, and you can basically configure your build setup as you wish with a minimal amount of work (it's Python, after all). I use SCons in my personal projects, and we have switched to it at work too, where we use it to build a huge project under Linux/cygwin/MSVC++. The only major downfall is the lack of detection routines for projects you plan on distributing. Fortunately, nowadays you can detect most important libraries using "pkg-config".

Down with autotools!

21 Jun 2003 01:23 wtarreau

Not the only problem
Hi !

Interesting article. I would add that another major problem of
autoconf is that the configure script stops on the FIRST missing
dependency. This makes compilation sessions very long because
if it needs 10 additional packages, you have to restart from the
beginning after each download, and sometimes you cannot
download from the same place you're compiling !

I would strongly prefer a report saying that I need to add X,Y
and Z packages so that I can download them all and compile
them all.

Willy

21 Jun 2003 01:58 arbonline

What about ant?
ant.apache.org/

I think a new auto{...} is needed, but it should be developed using something like RFCs, and it should be lightweight (ant isn't).

21 Jun 2003 02:26 BlueLightning

Definitely
I couldn't agree more. The automake/autoconf
tools are a nightmare to use, poorly documented,
and hard to maintain. As a developer I really
struggled to make just a few modifications to my
project's auto* setup (created for me by
KDevelop). We do need a better solution, but I
would prefer that it was one that wasn't
dependent on having a scripting language runtime
installed.

21 Jun 2003 02:32 dack

Raggin' on auto*
Nothing against anyone who doesn't like the autoconf/automake system, but this article seems to be nothing more than a mindless rant. Many of the "issues" mentioned are simply not an issue at all (using a pager?!?! oh no, anything but that!). I would expect that an article that is so bold as to hail the end of the auto* system would at least give some substantial reasons why the technology is inadequate.

21 Jun 2003 02:33 Avatar blindcoder

The best thing
Well, for me, autoconf and automake are the best thing since sliced bread.
With a few lines you can check if a library is installed and header files are there, and if you intermix a handful of shell code lines you can do almost anything.
Besides, it has helped us _a lot_ at ROCK Linux with creating packages without having to create a custom build script for each and every piece of software we want to use.

The next thing is, I don't think a biased article like this will change anything.

21 Jun 2003 02:51 bishopolis

You hit the nail almost exactly on the head
I almost totally agree with you - Emacs is the only thing GNU almost got completely right, and neither the autofoo suite nor their completely annoying insistence on their own stupid documentation format is an example of anything useful they've done.

Typical GNU (Gnu's Not Upgradable) software whines if something's older than three weeks old, and, like perl, the other half of their stuff needs a specific version for stuff because, SURPRISE, it's not forward portable. The common rant I hear at the office involves the label "That week's release". Take a look at the lovely incompatibility between pre-3.2 GCC and post-3.2 GCC for a great example of why no business should ever put any money into any GNU product. They simply know nothing of backward or forward compatibility, nor how to work with the greater populace at large (cf .info, above). Nevermind RMS's bid to have GNU mentioned alongside Linux, which is quite obviously a different mentality.

BUT, and this is where my opinion diverges from yours: I like Make files. I like that they can be used to automatically figure a bunch of stuff. Heck, I like imakefiles, too, but I'm barely scratching the surface there. Makefiles, for me, are about the best and most portable method of shipping something with software that makes it easy to build.

Imho, makefiles - and the equivalent file of ONE ratified new replacement for makefiles - should accompany any software project, and only when the makefiles (or whatever) blow chunks hard should the developer need to consider a Configure solution, and only if that fails should the user need to consider an autoconf, and only when that fails... In short, enough of this 'rebuild our whole tool chain' crap or 'install this whole new redundant, non-standardized and pointless language of the week just to make makefiles' crap; we need a solution that's gonna work now and in a decade.

What it all boils down to is the problem that we, as Real World Software People, maintain stuff that was built 10 years ago, and we drag it out every year to rebuild it under this year's OS release and to check it for strcpy and obvious problems like that. We don't want to have to upgrade (or downgrade) the entire friggin' tool chain, breeding random problems into other software projects that absolutely require a newer version, only so we can re-up/downgrade the tool chain when we're done. It's a waste of time, and time is like money that is better spent on the security checking rather than the build headaches.

In all, though, A really excellent illumination of one of the biggest warts on the Open-Source software radar, and one that should be fixed - gradually and in a manner that's backward compatible.

I now have .sig fodder, and a rant to which I can point people when I'm tired of saying the same thing over, and over again. "Yes, I understand, GNU is great and holy, but, as I was saying..."

Thanks for an almost perfect article. You totally made my night.

21 Jun 2003 03:08 tojansen

Binaries & Why Makefiles in Python are bad...
Users:

There is an even better solution for the majority of users:
don't fix the make system, fix the way of distribution. A
regular user should never be required to compile the
software. It's only practical for small apps anyway. Who
wants to compile KDE, or Mozilla, or OpenOffice?


Developers:

While I admit that the automake/autoconf/make setup is
far from ideal, I don't see a serious contender for C/C++.
scons is no solution for the simple reason that an IDE won't
be able to parse and understand such a make file. As more
and more people are switching to IDEs (and new
developers coming from the windows world take them for
granted), scons is not applicable for all projects. And only
a solution that beats automake&co in all aspects will be
able to replace it.
ant is a good solution for Java that has all the necessary properties, but unfortunately building C/C++ code is more complicated than building Java code...

21 Jun 2003 03:33 one

how dare you?
... speak against the long-time-tested, proven-to-be-working, widespread-and-commonly-adopted, holy gnu autoconf-tools? EVERYONE uses them!
...
well, everyone but me. I tried to write some configure script for my little project, thinking "you managed everything so far, this cannot truly be too hard".

I spent two - fully wasted - days looking for documentation, trying to write makefile.am, configure.in, what.ever, then trying to copy someone else's, and finally stuck to my old hand-written shell script which compiles. Gradually I upgraded this to a little bigger shell script, which is capable of recompiling itself. Dependencies? It's my project. It runs on my machines.

Well, I like makefiles, I like configure-based projects, because of the advantages mentioned. But writing one is - at least - a pain in the ass. We definitely need something new, so that someone like me - someone programming a little application - is able to make it public in a simple way, without a year of developer's work to write an install script.

In my personal view of the matter, we could have many, many more programs and developers if there were a simpler way of spreading your application. I can write programs, but I cannot create a configure script. That's not how it should be, in my view.

21 Jun 2003 03:39 asiala

Re: Binaries & Why Makefiles in Python are bad...

> There is an even better solution for the
> majority of users:
> don't fix the make system, fix the way
> of distribution. A
> regular user should never be required to
> compile the
> software. It's only practical for small
> apps anyway. Who
> wants to compile KDE, or Mozilla, or
> OpenOffice?

And if for some odd reason an advanced user has to compile the program, there should be an easy way to do so. Something like what srpms try to do.

21 Jun 2003 03:43 phacka

think positive
do you remember the days when auto* were not widely used? i certainly do, and it brings back a lot of not very pleasant memories. most of the versioning & co. problems are caused by the authors of the software, not the auto* authors. if you are scared of less, well, maybe your terminal has a scrollback buffer and you can easily scroll back (even with a mouse wheel in X). you can't expect Joe User to mess with development tools and know anything of CFLAGS & co; prebuilt packages are what he is after. and for the rest of us, auto* works just fine, and combined with checkinstall it is a joy to roll your own packages tailored exactly to your needs. but that doesn't mean auto* is divinely perfect; remember that competition brings innovation.

21 Jun 2003 03:50 scotty69

It's up to the developers, not the tools
I switched from C/C++ to Java a couple of years ago. By now I'm a happy ant user, always having a slight shiver running down my spine when thinking of the make mess in my previous developer life. Getting started with a reasonably developed piece of Java is a matter of having the right JDK, ant and cvs installed on whatever box you have, and typing 'ant' (ok, may be after editing some weird build properties...), and it is made both for emacs-fans and IDE softies.

But nifty tools cannot help you if the developers don't take care for proper versioning and compatibility issues. Open source Java programming by now suffers from the same versioning mess. It's a CLASSPATH nightmare to have to maintain a single Servlet engine with a few webapps using a few common open source frameworks.

Nevertheless I think autoconf/make etc. should be praised, put into the shrine of great old software, and then a modern build system should be made ;)

21 Jun 2003 04:30 zanac

Re: You're dead on
I don't know "autostuff" (i use this work instead of autoconf/automake ;)), 'cause i work with QT i always used qmake, that is very easy to use, and it is portable (win32, unix, macos).

Otherwise my little project was recently ported to autostuff by another developer... and i must agree that it is a nightmare: i'm trying to study how it works... and it is crazy! I really need a very easy tutorial... but this is not the time to tell about my problems :)

I want to say that the solution is make some tools that work like qmake, ie just add all c files in a section, headers in another section... and that's all! :)

21 Jun 2003 04:31 zanac

Re: You're dead on

> I don't know "autostuff" (i use this
> work

I mean "word", not work! :(
Excuse me for my bad English! :(

21 Jun 2003 04:32 algernon

A different opinion.
I will not talk about automake here - I don't like it, I don't use it. I have my own GNU-make-specific makefile system, which I find more flexible. Yet I must say that, contrary to what the author suggests, the Makefile generated from Makefile.am is NOT braindead. Yes, it is long and complex, but that doesn't make anything braindead, just harder to understand. You won't get the idea within 10 minutes. If one wants to understand it, he must study it. It took me like two days to fully understand an average Makefile generated by automake 1.4. From then on, when I looked at other Makefile.ams, I knew what the generated file would look like. The good thing with automake is that it is deterministic. You only need to understand it once, and that is nowhere near impossible. MakeMaker is way harder, if you ask me.

Autoconf, however, is a very good tool, and in my opinion, has good documentation. You don't even have to master M4 to use it (I, for one, don't, and I've been using autoconf for years without any kind of problem). Most of the things you want to do with autoconf can be accomplished using autoconf macros, a much higher-level language than M4. And what exactly do you need? That can often be found in the info docs - the autoconf info docs alone. Everything an average developer wants is there. There is even a search facility in most info browsers, so finding information is trivial if you know what you are looking for. If you do not know what you want, that's not the tool's failure, but yours.

There is also the Autobook, which I found very valuable when I was trying to grasp the bigger picture.

All in all, I heartily disagree with the author. And while there are problems with the autotools, and one toolchain does not suit everyone's needs, those who can use it properly - and believe me, there are many such people (everyone who has a small amount of clue, and can master his documentation-reading tools and Google, is able to learn the autotools quickly, in my opinion) - should be encouraged to use them. Those who make mistakes should be guided back to the right track, instead of being advised to use something else, which they would have to learn too.

Anyway, that's just my opinion. I use autoconf because it Just Works, and was easy to learn, unlike some of the other alternatives suggested in this thread.

21 Jun 2003 04:41 ralfengels

Re: Raggin' on auto*
Hi,

did you read the article?

I guess you didn't want to flame and so do I.
If you compile your projects you have less problems with dependencies (compared to rpm).

However the problems are all for real. How often did you try to write a config script? I tried two times but each time I had big problems. Last time I tried to add a new directory but it was not used because I did not include it in the configure.in file. No warnings there, no documentation.

All the problems (non-standard options, unnoticed dependencies, no backward compatibility, ...) are human errors, but these errors are made because the whole process is so hard to understand.

In the end I think that the configure scripts are a good thing. Being so paranoid is a good thing. Ever tried to compile gimp on a SGI or on a Sun where you didn't have root privileges?

But we need to improve it. Better documentation, less complexity.

21 Jun 2003 05:31 simonkalteis

ahem...
I think as plain Joe User you don't need autoconf installed in order to compile software. The configure script is all shell only.
You might need it if you try to compile CVS releases which Joe User in general should avoid for they are mainly targeted at developers.

However, most people that just want the software ready in a few steps do better with downloading a binary distribution. The source is for those who want to customize their package; those people (like me) like fiddling with Makefiles and are willing to invest a bit of time.

I think the current system is just fine.

21 Jun 2003 06:29 jepler

Replace autoconf with scons? Apples and oranges
Autoconf is concerned with detecting platform variations before compilation begins. Scons is primarily a replacement for /usr/bin/make, specifying rules about the relationship between files.

You also complain about autoconf as though it's the reason (some) software has a long list of build-requirements. Is it better to give the user a message during an early step of the configure-compile-install sequence, or do you prefer hiding the information in a README, dying with a compiler/linker error, or just getting a buggy binary? (By the way, I think the standard way to make autoconf find a header that's in a non-standard include directory is, as far as I recall, to run "env CC='/usr/bin/gcc -I/usr/include/lzw' ./configure ...")

As for being hard to learn, I don't care if it takes longer to learn autoconf than you're willing to spend. As a developer, it took me months to get a grasp of my first programming language. Later, in my first real programming job, it took me two years to unlearn an irrational hatred of tcl and tk. Now, forced to use C++ in a project, I'm embarrassed every time I have to ask a fellow programmer about some (invariably template-related) error message. So if I wanted to write a configure.in from scratch, I'd be prepared to spend a week reading documentation and examples before I declared that the tool was unusable. (Like Perl, which I gave a good 4 years, dozens of small programs, and a book or two before I decided that calling it 'write-only' was still too generous.)

21 Jun 2003 06:45 shirok

Separating issues
I've been using autoconf for quite a while, and recently have used automake. For me, those two tools are very different, with different goals and different design principles. And I always feel uneasy when people talk about those two as if they were a single tool.

In my personal opinion, autoconf's design is good, but the tool itself is badly abused. Its design is good because it achieves a simple goal (to generate a bunch of shell commands to test features) by a simple method (macro expansion). Although the quantity of the generated script is indeed intimidating, it is simply built on top of a small number of simple rules, so it doesn't take long for programmers to grasp the idea.

The beauty of macros is that you can mix the base language (shell script) and macro calls freely, which is very comfortable if you know shell programming. If you don't like the behavior of the pre-defined macros, you can usually get by with a combination of lower-level macros and shell commands. The downside is that the generated script can easily become bloated when macros are overused. I like to delegate complex tests to a subscript or a small test program and keep configure.in slim.

On the other hand, automake seems to do too many things at once. I assume its main goal is to generate the many cliches of typical makefiles. However, it also checks that the distribution is sane and copies some "standard" scripts, which looks like a sign of creeping featurism to me, even though I can turn the feature off. Automatically generating the make rule to re-run "configure", as the author of the article complains, is another example of automake doing too much. Furthermore, if I don't like automake's default handling of primaries, I have to fall back to bare make rules, instead of extending the existing primary behavior. I would also like to have a separate tool for creating distributions, instead of "make dist".

To be fair, I think automake works great if your package follows the way automake assumes. It only becomes a problem when you want to do some nontrivial tricks during a build process.

21 Jun 2003 06:49 anonononon

Cooker
Well, I might take this opportunity to announce my new project, cooker (cooker.sourceforge.net).

I only reg'd it last week, and am in the middle of uni exams at the moment, so it might be a few weeks before it's at a releasable stage (alpha, of course :)), but it's already looking pretty good (though not really user friendly yet).

I would appreciate any comments or suggestions at the forums - sourceforge.net/forum/...

21 Jun 2003 07:20 cpbotha

CMake solves many of the mentioned problems
CMake is a cross-platform build system that does everything that the auto* tools do, except that cmake works on ALL platforms, including Windows (with *and* without cygwin). On windows one has the option of generating several different flavours of Makefiles OR project files for MS Visual C++ 6 or MS Visual .NET.

CMake has proven itself with several very large cross-platform software projects and is freely available from www.cmake.org/

21 Jun 2003 08:45 UnThesis

MakeNG is the answer!
MakeNG (makeng.sf.net/) is made by a fellow user of wxWindows, a really cross-platform GUI framework (w/ threads, sockets, et al).

MakeNG supports procedures, conditional statements, complex dependencies, and high-level script statements.

Unlike other make systems, it is a patched version of GMake, which means it is COMPLETELY backwards compatible... the user won't even know they are using a MakeNG file if they have it installed on their system!

It *can* use Autoconf just fine, as my project, xMule, heavily demonstrates (freshmeat.net/projects...). xMule has over 300 C++ files, requires 5 libraries, and has 2 internal libraries. MakeNG can build these with just about 50 lines of code!

Here's a quick peek at the main Makengfile:

include Compilation.flags
include $(MAKENG_LOC)Makefile.stdinc

var-submake-makefile := Makengfile

$(proc enter-component,xmule)
$(proc derive-from-component,c-app-inc)
$(proc add-script-options,build,$(GTK_LIBS) `$(WX_CONFIG_PATH) --libs` $(LIBS))
$(proc add-required-component,build,libxrc,build)
$(proc exit-component)

$(proc add-subdirs,src)
$(proc enter-subdirectories)
$(proc build-system)
Compilation.flags is created either by Autoconf or MakeNG, whichever comes first. Thus there is no real reason to run ./configure except out of habit.

One of the best features of MakeNG is that you can also include regular gmake code!! It is *heavily* documented (80KB of documentation) and has tons of examples (128KB) plus extending it is SOOO simple, just a matter of editing high-level script files!

Un

21 Jun 2003 08:52 karellen

Re: how dare you?
Dan Bernstein uses a very simple [shell scripts mostly] build system for his [small] packages. I think there was some documentation on his site, cr.yp.to.

21 Jun 2003 08:58 polesapart

Improvements
It seems to me that with some improvements, autoconf and automake could do a better job. If I recall correctly, autoconf had no releases between 1996 and 1999, and had a quite slow development cycle at least until 2001. It has preserved some of its conceptual problems, but it could still be improved. In particular, I would point out:

- A better checking system. We don't have to check bit by bit whether there is a function X that returns value Y if we already know that we're running under system Z with libc version P. That's a prerequisite. If there are optional features which would not be available on some particular kind of system, the test should be done only once, the results stored somewhere (i.e. /usr/share/autoconf-results), and only updated if the prerequisites change (a libc upgrade, for instance). I know there are technical difficulties in implementing this, but don't come and say it's not possible, because it is, indeed.

- A better caching mechanism. The current caching implementation is stupid. There are lots of things that are checked more than once between configure's runs.

- It should be faster. For bigger programs, with a lot of checks and stuff, all that check-this, check-that, and in the end running sed a dozen times, is no longer acceptable.

And for automake, it should push a move to a non-directory-recursive approach. There is some work being done in that area, but it seems to me that it's more of a 'we support that as an alternative' idea. That would result in a several-times-faster build and, better yet, a consistent one; read:

www.pcug.org.au/~mille...

For some problems using recursive make file systems.

People object to changing something that works, but every decade or so there's a better way to do things, and the auto* tools should evolve into something better if they want to stay current, rather than become museum pieces.

21 Jun 2003 09:22 SerpentMage

Re: What about ant?

> ant.apache.org/
>
> I think a new auto{...} is need but it
> should be developed using something like
> RFCs, shoud be lightweight (ant isn't).

You know, you hit it on the head. Part of the reason why I have given up on C and C++ is the build process.
There the Java people did it right!

21 Jun 2003 09:23 aldem

There is no need for auto*
Just write portable programs. A portable program does _not_ need any auto* stuff - by definition :)

I personally have no idea why we need to support all this crap (ancient OSes, etc.), which is in use by only 0.01% of all developers, but which is included in every auto* - "just in case".

I don't understand why some projects are so big - until I look inside and see that 95% is auto* stuff, and only 5% is actual code.

So... Do we really need it? Do we really want to encourage people to use something old, non-standard, etc. by providing a tool to cope with it, instead of encouraging them to upgrade to something which is standard (or at least widely used)?

21 Jun 2003 09:25 compwiz

Just get to the point
This article could have been about one-fourth the size if the author had not spent all that time and space nitpicking about his personal aggravations and making completely biased generalizations.
"Most Unix people have not even heard of m4?" Where does one come up with such a thing? Anyone who has ever set up Sendmail, the most popular MTA on the Internet, has most likely used m4 at one time or another. It's a very powerful macro language, and IMHO very suited to autoconf's purpose.
Building from source is not easy. Supporting every platform imaginable is not easy. This article didn't even cover the most glaring problem, which is the incredible size of most configure scripts generated by autoconf, due to the fact that they're written in straight Bourne shell. Many of his problems with generating Makefiles are not autoconf-, but automake-related. The only thing the configure script does with Makefiles is fill in variable fields.
That being said, the author is severely misguided in his target for this article. Somehow he can write an entire piece dedicated to the destruction of autoconf, and yet his only real point is that Makefiles are, as powerful as they can be, severely flawed.

No kidding. This is news?

21 Jun 2003 09:50 algernon

Re: There is no need for auto*

> Just write portable programs. Portable
> program does _not_ need any auto* stuff
> - by definition :)

Ever tried to compile a program developed on GNU/Linux on, say, AIX?

Or even BSD?

There are subtle differences, and autoconf is a great way to handle those. There are also OS-specific features (like sendfile(), kqueue(), sys_epoll, /dev/poll and the like), which can be more easily detected via autoconf than in the source itself (sendfile has different semantics on GNU/Linux, FreeBSD and HP-UX, for example)

> I personally have no idea and I don't
> understand why do we need to support all
> this crap (ancient OS etc) which is in
> use only by 0.01% of all developers, but
> which is included in every auto* -
> "just in case".

Developers might not use this crap (old OSes like Solaris or AIX), but users do.

> So... Do we really need it? Do we really
> want to encourage people to use
> something old, non-standard etc.
> providing the tool to cope with it,
> instead of encouraging them to uprage to
> something which is standard (or at least
> widely used)?

Solaris, for example, is quite widely used and, as far as I have looked, standards compliant. However, standards are not clear at times, which results in subtle differences. Nor is Solaris old. Well, it might be old, but it is definitely not old crap, in the unmaintained sense :)

And anyways, you can't say "upgrade to Linux", when they have an OS that works perfectly fine.

21 Jun 2003 09:53 noda132

Author obviously misunderstands

Let's go through my issues with this one at a time...

He again does the typical thing and runs "./configure --prefix=/opt". Not to say the author has no experience with Linux, but this is not typical; in fact, it's wrong. There is a standard (www.pathname.com/fhs/2...) for installation prefixes.

You have an autoconf version which is three weeks old; please upgrade. This will not happen! autoconf, automake and m4 are not dependencies for the project!

He wants to use GTK, but how does he do it? Is it "--with-gtk" or "--enable-gtk"? It's --with-gtk. There is also a standard (www.gnu.org/prep/stand...) for this.

Then he discovers he's missing lib-lzw-3.2.3.4. Auto* were never made to handle dependencies -- this is what should happen (as opposed to compile errors).

Joe reruns make to see what the error is, redirecting stderr so he can scroll up and see the original error message of the 95 presented. This is entirely off-topic. It's a complaint with the make tool, and a very illogical one, since more errors are GOOD! :)

it failed to realize that the header files were located in /usr/include/lzw, not in /usr/include. This is the distributor's fault, not autoconf's.

He does this only to discover a 50,000-line-long monstrosity. Yes, but the nice thing is, all auto*-generated monstrosities look incredibly similar. Figure out one Makefile and you've figured them all out.

First, she looks for a tutorial on how to use autoconf. Yep, futile. But learning from example shouldn't be a hassle at all. And the auto* info documentation is very thorough.

The GNU info system isn't much help, either. This statement is off-topic, and wrong. If you're looking to learn something, you have to spend time to rtfm. Info files can be read by more than that command-line tool, you know :).

I doubt there is anyone who quite understands the braindead Makefile that is puked out. Um. It's a makefile. It declares variables and rules. Very straightforward.

Of course, I'd agree that many projects using auto* are hell to set up. This is because developers are lazy and don't bother to read the docs. Cars get into accidents all the time: is this the car's fault, or the driver's?

21 Jun 2003 10:14 reduz

Re: Author obviously misunderstands
Then it's not only the author, but the distro makers!
Talk to them and tell them to follow the "standard" too.

And if you like to rtfm, that's your problem; we know that with enough time one can figure out anything, but you seem to be missing the point. It IS complex, the documentation is BAD, and it is not intuitive or user friendly (programmers are users too!). If you had used other build systems like qmake or scons, you would understand what he means.


21 Jun 2003 10:27 caffineehacker

auto* work fine
I use autoconf and automake all the time, and they work fine for me in my projects. As you said, a well-written configure script is a thing of wonder. Well, it's not that hard. I do admit that tutorials are lacking online and often do not give people the needed information for how to check for gtk+-2 or many newer libs, but they give you enough to write a good working one. Most of the scenarios Joe went through would only happen if he messed with other stuff or the developer was severely braindead (always a possibility). Look at my Makefile.am and configure.in at sf.net/projects/gtactoe. They may not be pretty, but they function perfectly fine.
~Tim~

21 Jun 2003 10:32 algernon

Re: Author obviously misunderstands

> Then it's not only the author, but the
> distro makers!
> Talk to them and tell them to follow the
> "standard" too.

Most of the distributions I worked with (Debian, RedHat, SuSE) seem to follow a standard, called FHS. The BSDs usually follow their own policy, which is slightly different, but not too much.

> And If you like to rtfm, thats your
> problem, we know that with
> enough time one can figure out anything,
> but you seem to be
> missing the point on this.

No, he doesn't. If you do not spend the time to learn a tool, you will not understand it. That simple.

> It IS complex, the documentation is
> BAD and it is not intuitive or user
> friendly (programmers are
> users too!).

I wonder what documentation you refer to. I found both the autobook and the autoconf / automake info docs good, and autoconf quite intuitive, especially for the user.

> If you have used other
> build system like qmake
> or scons you would understand what he
> means.

I, for one, did try, and I'm sticking with autoconf, because with wise usage, you can get around all of the problems mentioned (maybe not the size and the complexity of the generated configure file, but size doesn't really matter too much nowadays, and you don't have to understand configure. If you understand configure.in, that's enough in my experience).

21 Jun 2003 10:41 aldem

Re: There is no need for auto*

> Ever tried to compile a program
> developed on GNU/Linux on, say, AIX?
>
> Or even BSD?

Yes, I did. I have a project library, which supports sockets, threads etc. - it compiles (and works) cleanly under Linux, FreeBSD, cygwin and Borland C++. I used few ifdefs (really _few_, and only to distinguish Win32/*ix platform), but that's all. So? :)

> There are also OS-specific features
> (like sendfile(), kqueue(), sys_epoll,
> /dev/poll and the like), which can be
> more easily detected via autoconf than
> in the source itself

The program which makes use of OS-specific features is not portable by definition, or do I miss something? If a system does not provide ANSI C or a standard C library, I won't use that system, because it is non-standard.

I don't care about users who still use non-standard systems - this is their problem after all. It is all like "please don't use HTML in emails and more than 80 columns" - hey, look around, it is 21st century already :)

What I am sure of is this: until old and outdated (or just badly designed) systems are supported, we will never get rid of them. But we should - sooner or later. Like the y2k problem - nobody cares until it happens...

No offense... Just my humble opinion :)

21 Jun 2003 10:48 kc5tja

Re: Definitely

> I couldn't agree more. The
> automake/autoconf
> tools are a nightmare to use, poorly
> documented,
> and hard to maintain. As a developer I
> really
> struggled to make just a few
> modifications to my
> project's auto* setup (created for me by
>
> KDevelop). We do need a better solution,
> but I
> would prefer that it was one that wasn't
>
> dependent on having a scripting language
> runtime
> installed.

Ummm....then you don't want to use even Bash? And Make itself is a scripting language, albeit a special purpose one.

If you want complex dependency management, you're going to need a language that supports such complexities. Perl and Python (the latter preferred for orthogonality reasons) are ideal for this task.

21 Jun 2003 10:55 algernon

Re: There is no need for auto*

>
> % Ever tried to compile a program
> % developed on GNU/Linux on, say, AIX?
> %
> % Or even BSD?
> Yes, I did. I have a project library,
> which supports sockets, threads etc. -
> it compiles (and works) cleanly under
> Linux, FreeBSD, cygwin and Borland C++.
> I used few ifdefs (really _few_, and
> only to distinguish Win32/*ix platform),
> but that's all. So? :)

Then that must be a really small library :) There are differences, non-subtle differences in getsubopt(), realpath() to name a few I recently ran into - differences between GNU/Linux and the BSDs. Such differences that you cannot #ifdef around, unfortunately. You must test how they work, and act upon the results.


> % There are also OS-specific features
> % (like sendfile(), kqueue(),
> sys_epoll,
> % /dev/poll and the like), which can be
> % more easily detected via autoconf
> than
> % in the source itself
>
> The program which makes use of
> OS-specific features is not portable by
> definition, or do I miss something?

My definition of portable is something that works on many systems. Of course, if an OS-specific feature is not provided, I use a POSIX or ANSI C replacement. However, if an OS provides features I can use to make my program perform better, I will use them, and this needs more than #ifdefs.

> If system do not provide ANSI C nor
> standard C library - I won't use this
> system, because it is non-standard.

The standard has weaknesses, and allows minor differences. Those can bite you in the rear, quite easily.

> I don't care about users who still use
> non-standard systems - this is their
> problem after all. It is all like
> "please don't use HTML in emails and
> more than 80 columns" - hey, look
> around, it is 21st century already :)

I still filter HTML mail to /dev/null (well, mostly). In e-mail, I want content, not markup. And 80 columns is too wide, 72-76 is enough :)

> What I am sure in - until old and
> outdated (or just badly designed)
> systems are supported, we will never get
> rid of them. But we should - sooner or
> later. Like with y2k problem - nobody
> cares until it happens...

Not badly designed, differently designed, in my opinion. Where the standard allows that, it happens.

21 Jun 2003 11:37 SpiralMan

problem exists, regardless of who is to blame
I have to say that as a user of Slackware (and thus a frequent user of the auto* tools) I have experienced a lot of the problems mentioned in the article. And while the article is definitely biased and also puts the blame in the wrong place sometimes, that doesn't mean that these problems don't exist and shouldn't be fixed.



While there may be standards in place for configuration options, perhaps the configure language should force the developer to use those standards (I'm not a big fan of forcing people to do anything, but when you have to cross-compile a package by specifying the target in the --host= line, or redefine the CC env variable just to get make to see /usr/local/include, there is a very serious problem).



As a user, I'm often googling and getting headaches trying to get auto* to find some library that I know I installed, but that it just refuses to see, and as a developer I won't even touch an auto* build system unless something has built it for me (frankly I'm more interested in writing software without bugs and with useful features than trying to figure out how the hell to get my working code to actually compile).

In short, I don't care whose problem it is, I'd like to see it fixed.

21 Jun 2003 11:40 asuffield

Re: There is no need for auto*

> % There are also OS-specific features
> % (like sendfile(), kqueue(),
> sys_epoll,
> % /dev/poll and the like), which can be
> % more easily detected via autoconf
> than
> % in the source itself
>
>
>
> The program which makes use of
> OS-specific features is not portable by
> definition, or do I miss something? If
> system do not provide ANSI C nor
> standard C library - I won't use this
> system, because it is non-standard.

Then your system will, frankly, suck. If you only use the POSIX-mandated fd polling mechanisms, then the program will run about as fast as a glacier; the platform-specific ones like sendfile(), sys_epoll, /dev/poll and kqueue() are significantly faster.

21 Jun 2003 11:51 asuffield

Re: Improvements

> And for Automake, It should impose a
> turn to a non directory-recursive
> approach. There is some work being done
> on that area, but seems to me that it's
> more like a 'we support that as an
> alternative' idea. That would result in
> a several times faster builder, but
> better yet, a consistent one, read:
>
> www.pcug.org.au/~mille...
>
> For some problems using recursive make
> file systems.

I have found this article to be confusing and misleading. It discusses several common problems with makefiles, most of which are not related to using recursive makefiles. Putting everything into one giant makefile means that make has to parse and process a huge DAG for every operation; this can make things slower, not faster. I have never seen convincing evidence that monolithic makefiles are inherently "better", although you may be able to construct *some* cases where they run faster (trivial examples are likely to do this).

21 Jun 2003 12:09 aldem

Re: There is no need for auto*

> Then that must be a really small library
> :) There are differences, non-subtle
> differences in getsubopt(), realpath()
> to name a few I recently ran into -
> differences between GNU/Linux and the
> BSDs.

95% of my code is not system dependent at all. The other 5% is related to sockets, threads and such stuff - which I have already implemented. Yes, the library is not too big, but not too small either - ca. 2000 lines of code. But I can hardly imagine how list or hash operations could be system dependent :)

> My definition of portable is something
> that works on many systems. Of course,
> if an OS-specific feature is not
> provided, I use a POSIX or ANSI C
> replacement.

If I can use something that is not system dependent, I'll use it, regardless of how good an alternative I have. This way I can be sure that my program will work - performance is not a problem nowadays.

21 Jun 2003 12:12 aldem

Re: There is no need for auto*

> Then your system will, frankly, suck. If
> you only use the POSIX-mandated fd
> polling mechanisms, then the program
> will run about as fast as a glacier; the
> platform-specific ones like sendfile(),
> sys_epoll, /dev/poll and kqueue() are
> significantly faster.

Not at all. This is not _my_ problem - if a system provides bad performance on top of select(), that's definitely not the programmer's problem. A good system will provide a select() which will (actually) work on top of poll() (emulated or whatever).

And, frankly, I don't care if response time will be 1ms or 2ms, as long as it keeps below 10ms :)

21 Jun 2003 12:28 mindriot

Re: There is no need for auto*

> > Ever tried to compile a program
> > developed on GNU/Linux on, say, AIX?
> >
> > Or even BSD?
>
> Yes, I did. I have a project library,
> which supports sockets, threads etc. -
> it compiles (and works) cleanly under
> Linux, FreeBSD, cygwin and Borland C++.
> I used few ifdefs (really _few_, and
> only to distinguish Win32/*ix platform),
> but that's all. So? :)

If I want to install your package in /usr/local instead of /usr, or maybe in $HOME/local, what do I have to do to make it do so?

That's one thing I like about autoconf, and it's actually the main reason I use it for my projects. I don't need to write complex install targets in my Makefiles, and force the user to edit the Makefile to change the paths. autoconf generates all the boilerplate code for me.

By the way, this is also very helpful if you write .spec files for RPMs or debian build scripts. With rpm .specs, the usual way when generating the package is to build and install it into a temporary directory, about like './configure --prefix=/usr; make; make PREFIX=/tmp/rpm1234 install'.

Admittedly, autoconf is extremely complex once you get to more complicated issues, but it does the stuff I mentioned very nicely.

21 Jun 2003 12:34 skx

Documentation ..

 Personally I found the info documentation a little hard to get to grips with, but I found the GNU Autoconf, Automake, and Libtool (sources.redhat.com/aut...) book very useful.
 The book is available online as well as in hardcopy and took me through converting my project into an autotool based project - along with the libraries it used.

 A lot of the proposed alternatives to autoconf/automake are very interesting. I've not used any of them, to be honest - but as long as they don't rely upon end-users (well, installers of your package) having special software, they are fine.

 Half the reason that the generated scripts are so baroque is that they rely upon nothing special or non-standard being installed - within, I guess, the limitations of GNU Make.

21 Jun 2003 13:09 bryanhenderson

Re: Just get to the point
I would agree that most Unix people have never heard of m4, or at least that they don't know any more about it than that it exists. The first generation of Unixers is quite familiar with it, but they are a smaller fraction of Unix people every year.

Most Unix people have never set up Sendmail.

In 1995, I set up Sendmail. I quickly determined that m4 was at least as complicated as the Sendmail configuration language it was trying to spare me from, and that I would not use m4 anywhere else, so I skipped the m4 Sendmail macros and just put together sendmail.cf manually (well, from an example, actually). I have not heard about m4 since then until today.

21 Jun 2003 13:19 bryanhenderson

auto* obsolete
I largely agree. Autoconf was very important when it was invented, because Unix systems were so different. Different systems had different C library functions available, for example.

But today, it is easy to write code that compiles and runs pretty much everywhere, because all the operating systems have filled out to where they all have a useful common subset of tools.

Today, there are much simpler ways to make a package run on both Solaris and Red Hat Linux than using Autoconf and Automake.

21 Jun 2003 13:24 adamized01

New solutions: that's what we need.
We need a real solution to the versioning mess.

Update dependency A and it breaks dependency B; update B and it breaks G, H, R and Q... Update those things and it breaks critical system files or overwrites system config files... Reinstall the system and it breaks updates... breaks, breaks, breaks...

Sorry for the rant about things breaking. However, by now someone should have found a real solution and not just reinvented the wheel. Since lots of software today uses autoconf, automake, auto..., it makes sense to improve things there and not just switch to something else that may or may not be better. If we can't get the old stuff right, what hope is there for anything new?

Right now I would like to suggest creating a powerful GUI app capable of using the existing autotools and compensating for their shortcomings...

Of course I am probably wrong and lame for even suggesting solving the shortcomings of the existing autotools and development processes.

Heck at least linux is cleaner and more stable than windows.

21 Jun 2003 13:34 asuffield

Re: There is no need for auto*

>
> % Then your system will, frankly, suck.
> If
> % you only use the POSIX-mandated fd
> % polling mechanisms, then the program
> % will run about as fast as a glacier;
> the
> % platform-specific ones like
> sendfile(),
> % sys_epoll, /dev/poll and kqueue() are
> % significantly faster.
>
>
> Not at all. This is not _my_ problem -
> if system provides bad performance on
> top of select(), that's definitely not
> programmers' problem. Good system will
> provide select() which will (actually)
> work on top of poll() (emulated or
> whatever).

The reason why all these good systems do not do it that way is because the limitation lies with the select() and poll() interfaces. The platform-specific interfaces (sys_epoll, kqueue, /dev/poll) are generally several *thousand* times faster, and operate in a completely different fashion. poll() is only marginally faster than select(); its primary advantage is that it does not have an inconvenient limit on the number of fds it can handle at once.

> And, frankly, I don't care if response
> time will be 1ms or 2ms, as long as it
> keeps below 10ms :)

Try 30ms versus 10us (that's microseconds, for a speedup of around 30000).

www.kegel.com/dkftpben...

Note that this is not in your user interface, but in the inner polling loop of your web server, and if you're using select() or poll(), it usually is *the* bottleneck.

21 Jun 2003 13:38 bryanhenderson

Re: ahem...

>
> The source is for those who want to
> customize their package; those people
> (like me) like fiddling with Makefiles
> and are willing to invest a bit of
> time.

And for people who aren't running one of a small number of totally standard platforms, such as Windows 2000 or out-of-the-box Red Hat Linux 7.3. Binary distributions are available only for totally standard platforms.

And that's why an automated 'configure' script is important -- precisely for people who don't want to fiddle with make files, but don't have a binary distribution available for them.

For the make file fiddlers, Autoconf/Automake are a disaster. My system is just unconventional enough that Autoconf frequently makes mistakes in determining what's available on my system and what it takes to compile something. That's OK, because I understand the system completely. But it is extremely difficult to wrest control of the build from Autoconf/Automake. It involves reading autogenerated files that were never intended for human readability, modifying lots of identical make files, etc.

21 Jun 2003 13:42 stigbrau

another alternative: package-framework

For people that are fed up with the GNU Autotools, the package-framework (regexps.srparish.net/s... ) configure and build system by Tom Lord might be worth taking a look at. It is not very mature yet, but it is easy to use for both programmers and users, and has some really nice features. It is also very small, and easy to understand in itself. It is not "magic", and thus easily extendable if it does not do quite what you want.

Though `package-framework' does require the programmer to write `Makefile.in's, these are typically less than 3 lines for simple programs and libraries. Also, the system makes it really easy to bundle several libraries and programs into a big distribution and have them all configure and build in the right order, issuing only one command. The system was also designed from the start to build libraries and programs outside the source tree. package-framework is also designed to be non-invasive. The only thing it requires is a Makefile.in and a PLUGIN directory (itself containing 1-3 small files) in each directory containing source to be built.

Imagine a source distribution, on CD or DVD, of some big collection of libraries and programs (akin to GNOME or KDE). `package-framework' could configure, build and install all the individual libraries and programs for you, in the right order, with only one call to "configure", and one to "make install".

However, as noted above, the project is not mature yet. It could do with better language support. C, shell and scheme are supported, and support for more languages is fairly easy to add. Perhaps the most important deficiency of the system is that it currently cannot build shared libraries (only static ones).

Other important features of package-framework are that it is very small, and the code is easy to understand (the code consists of posix shell and some generic makefiles written for GNU make). It is not "magic", and thus easily extendable if it does not do quite what the user wants. It also does not need to be "compiled" to produce a configure script, like with the autotools, and what the user gets is exactly what the developer has.

But hey, I'm biased.

21 Jun 2003 13:50 bryanhenderson

Why binary distribution isn't the answer

> There is an even better solution for the
> majority of users:
> don't fix the make system, fix the way
> of distribution. A
> regular user should never be required to
> compile the
> software.

Unfortunately, the only technology we have today for distributing binaries that work everywhere is Microsoft Windows. I.e. make everyone run exactly the same thing (via whatever devious means necessary, including withholding source code and forcing computer manufacturers to use a standard configuration). Then you know you can distribute a binary and it will just work.

As we know, that technology has severe drawbacks, so we've settled on a compromise wherein things have to be distributed as source code and configuration programs have to be distributed with them.

Someday, technology may advance to the point that you can click on a button on a web page and have software installed exactly the way you want it, regardless of how specifically tailored to your own tastes and needs your system is. Discussions like this are what lead us there.

21 Jun 2003 13:59 niggerbottom

sort of a pain
i concur, the auto* tools can be a pain in the arse sometimes. mainly when you are dealing with an architecture that it doesnt like (i had to hack every single "configure" script i ran under OSX 10.1 to get stuff to build)

as far as ambiguity on flags, "./configure --help" usually clears things up for me.
in short, i'd agree this system is a little archaic, but it does the job. i've recently switched to gentoo linux, and their portage database has so many programs in it, i rarely run configure/make scripts myself anymore -- portage does it for me, with all the options.

so i guess in short, the question that comes to mind is -- is it better to replace the auto* tools, or build another layer on top of them to make things easier for the user?

21 Jun 2003 14:03 bryanhenderson

Yes, car crashes are the car's fault

> Of course, I'd agree that many projects
> using auto* are hell to set up. This is
> because developers are lazy and don't
> bother to read the docs. Cars get into
> accidents all the time: is this the
> car's fault, or the driver's?

This is a dangerous attitude. Accidents are both the car's and the driver's fault, at least insofar as they can be prevented either by fixing the driver or fixing the car. Many, many fixes have been made to cars since they were invented that make them less likely to get into accidents when driven poorly.

In Software, this is the attitude that says, "I'm not going to change my program because it's the user that's broken, not the program."

A configurator that requires a developer not to be lazy is deficient. It is a worthy goal to fix the configurator so that even a lazy developer can use it.

Maybe developers are setting up Autoconf wrong because it is poorly documented, or unnatural to use, or hard to learn. There's plenty of blame to go around.

21 Jun 2003 14:45 aldem

Re: There is no need for auto*

> Try 30ms versus 10us (that's
> microseconds, for a speedup of around
> 30000).

Why would I need a car that can accelerate up to 1000mph if I can't even find a good road to drive at 200mph on? :)

Eventually, all poll()/select() etc. interfaces will be replaced by event-driven equivalents, so... /dev/poll and so on are only quick tweaks for now.

It is better to design something that is good for decades rather than invent a new (somehow better) wheel every year.

Just a single example - the code that I write using the Win32 API works on all Win* platforms without a single change. On every *ix I have to tweak something, because even read()/write() behavior sometimes differs between *ix flavours (even versions!)...

21 Jun 2003 14:49 aldem

Re: There is no need for auto*

%If I want to install your package in /usr/local instead of /usr, or maybe in $HOME/local, what do I have to do to make it do so?

make prefix=/blabla/ install

And I don't like programs that are tied to their location at compile time - this shouldn't happen. Better to use relative paths - relative to the program's location or so.

21 Jun 2003 15:56 noda132

Re: Yes, car crashes are the car's fault

> % Of course, I'd agree that many
> projects
> % using auto* are hell to set up. This
> is
> % because developers are lazy and don't
> % bother to read the docs. Cars get
> into
> % accidents all the time: is this the
> % car's fault, or the driver's?
>
> This is dangerous attitude. Accidents
> are both the car's and the driver's
> fault, at least insofar as they can be
> prevented either by fixing the driver or
> fixing the car. Many, many fixes have
> been made to cars since they were
> invented that makes them less likely to
> get into accidents when driven poorly.

Funny that we get more car accidents nowadays than we used to, eh? Is this because we have more cars on the roads, or because people feel safer? Cars move faster, yet people trust them more: it causes more problems. I'd say my analogy was quite appropriate.

> Maybe developers are setting up Autoconf
> wrong because it is poorly documented,
> or unnatural to use, or hard to learn.
> There's plenty of blame to go around.

I'm a realist. When I see bad C code it's commonly intermingled with a shoddy ./configure script; I immediately think the developer is not experienced. Likewise, whenever I see a clean (i.e., like 30 lines) Makefile.am and configure.in, there is elegant code alongside it. This is no coincidence.

21 Jun 2003 16:19 polesapart

Re: Improvements

> I have found this article to be
> confusing and misleading. It discusses
> several common problems with makefiles,
> most of which are not related to using
> recursive makefiles. Putting everything
> into one giant makefile means that make
> has to parse and process a huge DAG for
> every operation; this can make things
> slower, not faster. I have never seen
> convincing evidence that monolithic
> makefiles are inherantly "better",
> although you may be able to construct
> *some* cases where they run faster
> (trivial examples are likely to do
> this).
>

That depends entirely on the kind of project we're talking about. It's easy to see why a project with sparse directories and targets would win (in terms of speed) by using a single starting point (whether a single makefile, or several conditionally-included ones): entering a directory, parsing a makefile, checking for implicit rules (gnu make does that by default), all over and over again, is clearly a loss. In the case of automake, every time it is parsing a makefile which is very similar to the others, which is an obvious redundancy. Not to mention that with current make programs, it's not possible to generate a good dependency tree that reflects multi-directory target dependencies correctly. If you want a real-life example, try the recent linux kernel 2.5 build system against the old one, still used in 2.4; the new one is based on a new paradigm and is both faster and more consistent. It's just like selecting a sorting algorithm: its efficiency depends on the type of the data and other factors. You won't waste time building a complex makefile system for a simple program, but today's real-life multi-directory programs, like wine, could only win...

21 Jun 2003 16:34 asuffield

Re: There is no need for auto*

>
> % Try 30ms versus 10us (that's
> % microseconds, for a speedup of around
> % 30000).
>
>
> What for I need a car which can
> accelerate up to 1000mph if I can't find
> a good road to drive even on 200mph? :)

Broken analogy. There is no such restriction here. And even if there were, it's more like the difference between a car and a jet.

> Eventually, all poll()/select() etc.
> interfaces will be replaced by
> event-driven equivalents, so...
> /dev/poll and so on are only quick
> tweaks for now.

/dev/poll and so on are the replacements. What were you expecting?

> It is better to design something that is
> good for decades rather than invent a
> new (somehow better) wheel every year.

So you would consider it acceptable to limit yourself to 10000 users instead of handling hundreds of thousands just because you had a new one last year?

(Those figures are extreme; usually you hit the limit of poll() around a few thousand users, and push it back so far that it no longer matters compared to available bandwidth)

21 Jun 2003 16:37 asuffield

Re: There is no need for auto*

> And, I don't like programs thar are tied
> to their location upon compilation -
> this shouldn't be. Better to use
> relative paths - relative to program
> location or so.

How do you expect to determine the program location? This problem is hard to solve usefully, and impossible to solve completely.

21 Jun 2003 16:41 arpi

MPlayer core team also hates autosh*t
I fully agree with the article, at least the rants part :)

Autoconf is a good idea, but a very broken implementation.

Automake is just useless, gnu make is powerful enough to write short, efficient rules in it. No need to generate 50k+ files.

libtool... ehh. it should not have been created. it IS the real nightmare when it comes to compatibility/versions.
I understood that libtool was created to work around linker incompatibilities, but since 99% of current open source software compiles only with the gnu build tools anyway (due to use of gnu-specific features), it has lost its purpose.

I vote for hand-written ./configure scripts; it even works for large projects like MPlayer!
If you write it modularly (using functions etc.) then it's short (compared to the sometimes 500-900k autoconf-generated scripts), clear, and readable. Even Joe User can easily read and understand/fix/change/extend the configure script.
(not to mention the <30 line makefiles)
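Something along these lines, as a rough sketch (not MPlayer's actual script, just the general shape of a tiny hand-written configure; zlib.h is only an example of an optional dependency):

#!/bin/sh
# parse --prefix, check the compiler, probe one optional header,
# and write the results to config.mak for the Makefile to include.
prefix=/usr/local
for opt in "$@"; do
  case "$opt" in
    --prefix=*) prefix=${opt#--prefix=} ;;
  esac
done
cc=${CC:-gcc}
echo 'int main(void) { return 0; }' > conftest.c
$cc -o conftest conftest.c || { echo "error: no working C compiler"; exit 1; }
echo '#include <zlib.h>' > conftest.c
if $cc -c conftest.c -o conftest.o 2>/dev/null; then have_zlib=yes; else have_zlib=no; fi
rm -f conftest conftest.c conftest.o
cat > config.mak << EOF
PREFIX = $prefix
CC = $cc
HAVE_ZLIB = $have_zlib
EOF
echo "prefix=$prefix cc=$cc zlib=$have_zlib"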

Although a good standard for configure option naming and use is really needed.

A'rpi

21 Jun 2003 16:48 asuffield

Re: Improvements

> That depends only on the kind of project
> we're talking about. It's easy to see
> why a project with sparse directories
> and targets would win (in terms of
> speed) by using a single startpoint
> (whether a single makefile, or several
> some conditionally-included ones):
> entering a directory, parsing a
> makefile, checking for implicit rules
> (gnu make does that by default), all
> over and over again, is clearly a loss.

Traversing a huge DAG for every command is clearly a loss. Entering a directory and parsing a small makefile is very likely to take less time than this.

> Not to mention that with
> actual make programs, it's not possible
> to generate a good dependency tree that
> reflect the multi-directory target
> dependencies correctly.

Nix. I've never had trouble doing this for complex trees, with or without automake. You just have to arrange your tree in a manner that reflects the dependencies. This is not a particularly limiting constraint; it's actually something you should be doing anyway, to make the code easier to navigate. There should not be any circular dependencies in your source tree, inner directories should not depend on things at a higher level, and so on.

> If you want a
> real-life example, try the recent linux
> kernel 2.5 build system against the old
> one, as yet in 2.4; The new one is based
> on new paradigm and is both faster and
> easier to achieve more consistency.

Which has very little to do with it being monolithic, and a great deal to do with it being better designed. This is a real life example of the sort of confusion to be found in that article.

If you take a system, and change A and B, and it works better afterwards, that does not mean that changing A made it work better.

21 Jun 2003 17:17 kodgehopper

Dont mind it as a user, Hate it as a developer
I've used autoconf as both a user and developer. As a user I think it's excellent most of the time, and does what it's supposed to. I personally always look for a ./configure script cos if it exists, it generally means I'm not gonna be spending my afternoon hacking Makefiles or trying to figure out library dependency problems from gcc error messages; ./configure normally discovers those problems for me and gives me the opportunity to fix them very quickly.

I also do my fair share of coding, and like Jane from the article, my experience with autoconf was less than pleasant. In fact, I agree with all the problems mentioned for developers. Eventually, I tend to just write my own Makefiles, and then modify them ever so slightly to fit the autoconf system. I just haven't been able to wrap my brain around the whole system, and after a while I just stopped caring. The documentation is terrible; even the autoconf, automake and libtool book didn't quite help me much. The autoconf concepts were just a little too obscure. And I still don't know how automake works. Still, I figured I was in the minority since most other developers create pretty decent configure scripts.

Regarding the author's view of user problems, I have to disagree with most of them. If you WANT to get creative and install in /opt (which personally I hate, that's what /usr/local is there for), then be prepared to deal with the consequences. Additional configuration will most likely be needed, but that's not because autoconf is flawed, it's because /opt is evil. Stick with standard paths and many of the other complaints fall away.

Naming conventions such as --enable-XXX are certainly beneficial to the overall ease of use, but I can't say I've had too many hassles, since a ./configure --help usually provides meaningful help, and options are then pretty easy to find. Personally, I wouldn't even classify this as a problem.

I've manually updated autoconf, perhaps 3 times in the 5 years I've been using Linux. I have not manually updated autoconf in at least 2 years, I just use the version that comes with my distribution. In all fairness, I upgrade my distribution about once a year. The whole "your autoconf is 3 weeks old, please upgrade" is just nonsense.

The claim that autoconf doesn't properly find header files has some merit, but this is not entirely an autoconf problem. "configure" searches the standard header locations. If application X stores its header files in /usr/include/X/, autoconf can't magically be expected to know that. Perhaps some central database of header files and other information needs to be implemented, or, much more simply, developers of these libraries need to follow the example set by tools like gtk-config, where that utility keeps track of what's installed where. This problem, then, is not a failing of autoconf and friends; it does very well with the information it has.
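For instance, a configure script (hand-written or generated) can simply ask the library itself where it lives instead of guessing at header paths; a sketch, assuming gtk-config (GTK+ 1.x) or pkg-config (newer libraries) is on the PATH:

# let the library report its own include and linker flags
if gtk-config --version >/dev/null 2>&1; then
  GTK_CFLAGS=`gtk-config --cflags`
  GTK_LIBS=`gtk-config --libs`
elif pkg-config --exists gtk+-2.0 2>/dev/null; then
  GTK_CFLAGS=`pkg-config --cflags gtk+-2.0`
  GTK_LIBS=`pkg-config --libs gtk+-2.0`
else
  echo "GTK+ not found"; exit 1
fi
CFLAGS="$CFLAGS $GTK_CFLAGS"
LIBS="$LIBS $GTK_LIBS"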

Personally, I think Joe needs to use the shotgun. A seasoned Unix user would have realized that the Makefile generated by configure is not meant to be human-friendly, it's just meant to work. That's the whole point of the system. Who cares if it's readable by a human? I use flex and bison, and while those tools are invaluable at times, I sure as hell don't go hacking the generated C code. Joe just seems like the sort of person who enjoys skiing uphill.

Overall, as a user, the autoconf system rocks. As a developer, it sucks, and it needs to be made more intuitive.

And if this comment turns out longer than I expected, It's cos I'm seriously bored and honestly have nothing better to do right now.

21 Jun 2003 18:41 BlueLightning

Re: Definitely
Well the difference is that 99% of distributions include Bash, not quite as many include Python (though a lot do). I have just taken a look at SCons and I have to say it looks pretty good. Unfortunately though, until KDevelop supports it I can't really use it in my project.

21 Jun 2003 18:52 BlueLightning

Re: Author obviously misunderstands

> Um. It's a makefile. It
> declares variables and rules. Very
> straightforward.


If you think the makefiles that automake produces are straightforward then you're probably the sort of person that would create the crime that is the sendmail config file. Automake-produced makefiles are a complete mess - there is no reason why they have to be as complicated as they are.

21 Jun 2003 19:54 reduz

Re: Author obviously misunderstands

>
> Most of the distributions I worked with
> (Debian, RedHat, SuSE) seem to follow a
> standard, called FHS. The BSDs usually
> follow their own policy, which is
> slightly different, but not too much.
>


No, FHS is a totally ambiguous thing, and most distros interpret it as they wish. It is not fair to call it a "Standard". Debian does everything in /usr, Red Hat has /usr and /usr/local, SuSE and many Unixes have /opt, and MacOSX has none of the above.


> % And If you like to rtfm, thats your
> % problem, we know that with
> % enough time one can figure out
> anything,
> % but you seem to be
> % missing the point on this.
>



> No, he doesn't. If you do not spend the
> time to learn a tool, you will not
> understand it. That simple.
>


You and he do miss the point, and the point is that not everyone wishes to spend large amounts of time learning something, because it is overly convoluted, complicated, and poorly documented.



> I wonder what documentation you refer
> to. I found both the autobook and the
> autoconf / automake info docs good, and
> autoconf quite intuitive, especially for
> the user.
>


This article and the rest of the posts here are mostly talking about the developer's standpoint.

21 Jun 2003 20:06 noda132

Re: Author obviously misunderstands

> % Um. It's a makefile. It
> % declares variables and rules. Very
> % straightforward.
>
>
> If you think the makefiles that automake
> produces
> are straightforward then you're probably
> the sort
> of person that would create the crime
> that is the
> sendmail config file. Automake-produced
> makefiles
> are a complete mess - there is no reason
> why they
> have to be as complicated as they are.

Gotta defend myself on this one: Is there any reason for them to be any simpler? They will naturally be enormous things because of all the generic logic which is used -- the kind of things auto* were made for in the first place. I won't try and argue that they cannot possibly be smaller, but I have to wonder why it matters.

Who cares how big they are? They *are* logical. Not only that, but every one behaves the exact same way. Variables first, then rules. The rules are always the same: "distclean," "all," "uninstall," "install..." you don't even have to read them because they don't change! And it's a very bad point to bring up in a debate against auto* because the Makefile is not one but two levels deeper than what anybody ever should edit. You're not supposed to edit the Makefile. You're not even supposed to edit configure or Makefile.in. Developers and users alike should trust that these files contain the logic the developers described in the auto* input files; if they don't, that's a bug which can be overcome without rants, like any software bug.

It seems to me Joe User is someone who believes he understands what tools do but is misinformed or "figured it out" for himself without asking anyone else. This is fine, I'm sure about 10% of people reading this comment have been there themselves. But it seems to me rather odd that Joe User would then bother to write such a misinformed article.

Many developers have valid concerns about auto*. The author has somehow managed to miss them, though.

21 Jun 2003 20:14 noda132

Re: There is no need for auto*

> And, I don't like programs thar are tied
> to their location upon compilation -
> this shouldn't be. Better to use
> relative paths - relative to program
> location or so.

Distro makers must hate you then: ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var

21 Jun 2003 21:11 Netzapper

Newbie Question
Wouldn't it be possible (and possibly superior) to create a tool that profiles a system once, and then at installation/configuration time simply fills in variables based on the values held in a database? Then, as the new program is installed, an entry is added to the database saying that "libfnord-2.3.5" is now installed in "/usr/lib/libfnord/".

I know that not everyone would have said program, but for those who have it, would it not render the configuration process obsolete?

21 Jun 2003 21:24 FirstTiger

No one's mentioned qmake?
I agree that the whole business of building software from source, and coping with the dependency disaster, and coping with the differences between platforms, is fundamentally broken.

The two best things the FSF ever did are gcc and gtar. The two worst are their texinfo documentation and the auto* tools.

I like the approach (and yes, I realise it's pretty much a Qt specific thing) taken by qmake. It seems to solve much the same problems as auto*, but without the pointless complexity.

It's simple and it works.

Granted, it doesn't try and solve the cross platform issue - that's palmed off to the qt library. Maybe that's the problem with auto* - it tries to solve too many problems at once.

Perhaps we should break *the problem* into three parts:

1. Cross platform compatibility
2. Building the software
3. Managing dependencies

I don't know.... just my 1.5 cents worth...

21 Jun 2003 22:31 Avatar Ullerup

Re: Author obviously misunderstands

> He again does the typical thing and runs "./configure
> --prefix=/opt". Not to say the author has no experience
> with Linux, but this is not typical; in fact, it's wrong.
> There is a standard for installation prefixes.

Nonsense. It's not wrong. Atypical maybe, but not wrong. A person can administrate their system however they please. If the sysadmin wants to compile software and install it into "/mystuff" then he's free to do so. There's no requirement to follow those conventions. I use /opt myself in place of /usr/local when installing software on my systems.

21 Jun 2003 22:35 dakoda

Re: auto* work fine

> projects. As you said, a well written
> configure script is a thing of wonder.

I'd have to mostly agree; most of the autoconf nightmare things come from very, very poorly written configure scripts and/or makefiles. configure should never need to run more than once (it should never be re-run from a makefile), among other stupid things developers often seem to do (this one has caused me the most problems).

to me, it almost seems like some projects put too much effort into their configure.. it _doesn't_ need to do all that much; check some headers, some library versions, and then some system stuff (type sizes etc), and then you should have essentially all that most projects will ever need.

21 Jun 2003 23:41 phraggle

agreed
I cannot stress how far I agree with this article. I have been saying to people for the past year at least that the whole auto* system is a horrible horrible mess.

I tend to find myself spending several hours for each new package I make, just setting up the build system. Once it works, it works fairly well, but it is quite simply way too complicated to set up. To be quite blatant, I should NOT be spending HOURS setting this up.

I think part of the problem is that the whole system seems to be built upon layers on layers of hacks. Autoconf wasn't quite enough, so they wrote automake. Then autoheader. Autom4te. Repeat ad infinitum.

I don't think the idea of shell scripts that generate makefiles is necessarily a bad one, but it would be nice if the whole system could be rewritten in a coherent form, preferably using sane tools (who the hell uses m4?).

It would also be nice if there was a GUI option so less experienced people could install without having to use the console.

21 Jun 2003 23:45 phraggle

Re: Newbie Question
This is what pkg-config does - see freedesktop.org. However, it doesn't totally solve the problem, as auto* does more than just find libraries. Definitely pkg-config is a good thing, though.
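For example (libfnord here is the hypothetical library from the question above; this assumes it ships a .pc file):

pkg-config --modversion libfnord        # prints the installed version, e.g. 2.3.5
pkg-config --cflags --libs libfnord     # prints the -I/-L/-l flags a build should use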

21 Jun 2003 23:45 Avatar yanestra

Re: Author obviously misunderstands

> Let's go through my issues with this one
> at a time...
> He again does the typical thing and runs
> "./configure --prefix=/opt". Not to say
> the author has no experience with Linux,
> but this is not typical; in fact, it's
> wrong. There is a standard for
> installation prefixes.

I don't know what FHS has to do with all those problems with configure. Anyway, FHS is an option; it breaks several older traditions and therefore can be considered meaningless, depending on your distribution.

21 Jun 2003 23:55 phraggle

Re: A different opinion.

> It took me like two
> days to fully understand an average
> Makefile generated by automake 1.4.

This pretty much sums up my feelings on what is wrong with auto*.

22 Jun 2003 06:40 polesapart

Re: Improvements

>
>
>
> Traversing a huge DAG for every command
> is clearly a loss. Entering a directory
> and parsing a small makefile is very
> likely to take less time than this.
>

That's very relative; it depends on the proportions. I have been doing some benchmarks for a while. For example, on a tree with about 150 subdirectories (in fact, it's a modified version of wine), with the standard makefile system (a recursive one, not automake based), it takes on my machine about 11.5 seconds, not counting disk access time, to traverse all the directories and consider all targets. The timings are the average of 6 consecutive runs, disregarding the very first (so that disk access doesn't count, since I have more than enough memory for caching). With a modified build system I'm still working on, this time is down to about 2.3 seconds on average. With disk access counted, it's about 19.0 and 10.2 seconds, respectively. It's a substantial difference for such a simple task, if you look at it as a percentage.

> Nix. I've never had trouble doing this
> for complex trees, with or without
> automake. You just have to arrange your
> tree in a manner that reflects the
> dependencies. This is not a particularly
> limiting constraint; it's actually
> something you should be doing anyway, to
> make the code easier to navigate. There
> should not be any circular dependencies
> in your source tree, inner directories
> should not depend on things at a higher
> level, and so on.

Well, even in some questionable build systems, circular dependencies are considered a bad strategy and ultimately a conceptual error, so that's nothing of practical relevance here.
And as for directory depth, you don't need to go deep for things to be far from optimized. The target dependencies (as seen by make itself) are at the very least incomplete, as you can see in the paper's example on page four. By the way, the automake developers have fully understood this and implemented this approach as an alternative, as you can see in the chapter named "Alternatives" (or something like that) of the manual.


> Which has very little to do with it
> being monolithic, and a great deal to do
> with it being better designed. This is a
> real life example of the sort of
> confusion to be found in that article.
>

Perhaps you should do some empirical checking before considering the theory? First of all, the Linux 2.5 build system is not monolithic in the sense of being composed of a single makefile; it still has a main makefile which then takes the subdirectory ones into account. It does that before starting to make the targets, and not one directory at a time, which allows make to construct an entire view of the build tree at once. When it needs to rebuild a target, it just does that, instead of blindly checking directory by directory - parsing makefile after makefile - when there is no target to rebuild in most of them.
The main gain is not for one-off builds, but for developers, who need to rebuild the entire kernel with sparse changes here and there.

> If you take a system, and change A and
> B, and it works better afterwards, that
> does not mean that changing A made it
> work better.
>

I never said it did. It has improvements in other areas too, but those are just complementary to what the discussion is focused on. I have benchmarked against previous 2.5 versions, from before the build system change, but I believe the timing gain on rebuilds is not the main point here; what matters is that it simplifies dependency checking and parallelism (when I have to build it on a 4-way system I take that into account) in a way that is both simpler and better.

But even if we treat all of this as purely experimental theory, without a single test or any probability of success, experimenting is good, and in most cases it's how people find out whether something is worth anything, right? If you don't want to try, you don't need to; it's a matter of personal taste. There are people out there still using Imakefiles, and I won't try to change their minds - if Imakefiles suffice, with their goods and bads, they probably won't feel tempted to change anyway. But going from there to saying "anything other than Imakefiles is just confusing" is rather unscientific. If someone wants to prove to others that something new is better or worse, they should do it with facts and data, not personal taste. The data I have is only enough to convince myself; I'm just pointing out alternatives and relating my experiments with build systems, which go back some years. I still use my recursive-makefile-based build system on some old projects, because I won't change them if they're simple enough to hold up fine under the old approach; for many years I have used autoconf/automake too, in some cases for good, but not always. My current experiments have good results, but if something else appears, I would be glad to test it.

And as far as automake's usage is concerned, it could be done in a way that would be quite transparent to the project maintainer and developers, so those who prefer the older method have nothing to worry about on that account.

My two cents.

22 Jun 2003 06:59 polesapart

Re: New solutions, that's what we need.

> [snip]
> Sorry for the rant about things braking.
> However by now someone should have found
> a real solution and not just reinvented
> the wheel. Since lots of software today
> uses autoconf, automake, auto.... It
> makes sense to improve things there and
> not just switch to something else that
> may or may not be better. If we can't
> get the old stuff right, what hope is
> there for anything new?
%

It happens frequently in the open source world that when some project shows a limitation, someone forks it or starts another one from scratch to achieve exactly the same goal, only by slightly different means. In some cases that results in a win, but in others it's just a waste of time and effort. I don't even like to entertain the idea of restricting people's freedom to start things over or to improve them; after all, the wheel would still be square if everyone were limited to thinking one way or another. In the case of the auto* tools, my vote would be to improve them - after all, they have their good points. I listed in another post some of the points I believe are worth attention, but certainly there are others. I don't see points such as using m4 and plain shell as weak ones; I think m4 is rather easy to use, contrary to what the article author says. And the same goes for Joe User: consider writing a sendmail configuration profile for parsing with m4 versus editing sendmail.cf by hand (which is nonsense, because you lose your changes if you have to upgrade).

There's room for other build systems, but some standardization of configure's parameters, some speed improvements and better consistency, followed by a user-intuitive way of managing a project tree, would make it a hell of a tool.

> Right now I would like to suggest
> creating a powerful gui app capable of
> using the existing autotools and
> compensating for thier shortcomings...
>

I vote for a gui, but I believe it should not be created to compensate for current deficiencies, but rather for managing the entire project, while the auto* tools themselves are fixed and improved in parallel.

> Of course I am probably wrong and lame
> for even suggesting solving the
> shortcomings of the existing autotools
> and development processes.
>

I don't think you are. I wouldn't use auto* for every kind of project - in some cases it stands in the way instead of helping - but it's worth the effort of improving! :-)

> Heck at least linux is cleaner and more
> stable than windows.

Windows? I forgot what is that about :-P

22 Jun 2003 07:31 caffineehacker

Re: problem exists, regardless of whom is to blame

> just to get
> make to see /usr/local/include, there is

Ummm, it's called /etc/ld.so.conf, use it sometime

22 Jun 2003 07:34 caffineehacker

Re: Newbie Question
You're also getting into the massive registry problems in windows. What happens when the uninstaller is broken or I just delete the program, then all hell rains down.

22 Jun 2003 07:40 caffineehacker

Re: MPlayer core team also hates autosh*t
The problem now is that you are spending hours writing those many 30-line Makefiles. My Makefile.am's are only about 5 lines each; that's not too much time to write. My configure.in is a little longer, but it still took less time than it would've taken to write a full configure script myself.
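For what it's worth, here's roughly how little input that can be for a toy one-binary package (names are made up, old-style automake 1.4 / autoconf 2.13 input, written out from a shell prompt as a sketch):

cat > Makefile.am << 'EOF'
AUTOMAKE_OPTIONS = foreign
bin_PROGRAMS = foo
foo_SOURCES = main.c util.c util.h
EOF
cat > configure.in << 'EOF'
AC_INIT(main.c)
AM_INIT_AUTOMAKE(foo, 0.1)
AC_PROG_CC
AC_OUTPUT(Makefile)
EOF
# generate aclocal.m4, configure and Makefile.in from the two files above
aclocal && autoconf && automake --add-missing --copy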

22 Jun 2003 12:30 yeti2

Re: You're dead on

> You must have automake 1.6 or 1.7
> installed to compile The GIMP.
> blablaba...

This is FUD. No one has to have autoconf or automake installed to *compile* things. Either you don't know a s... about the auto* tools or you are a troll. If you see a message like this, then someone made a terrible mistake (maybe you) and it's worth a bug report (if it wasn't you who made the mistake).

22 Jun 2003 12:35 yeti2

Re: What about ant?

> You know you hit it on the head. Part
> of the reason why I have given up on C
> and C++ is because the build process.
> There the Java people did it right!

By hiding the pain of installing software inside the installation of the right j2sdk1.4.0_10_20_30.5_released_in_the_right_moon_phase, otherwise the silly thing won't work...

Really great.

22 Jun 2003 13:36 yeti2

A coincidence?
Most ranting comes from people who don't understand the auto* tools (proving that by writing things like "you have to have automake installed to compile things" or "you have to know m4 to create autoconf macros") and who don't remember what software distribution looked like before autoconf and automake. Their posts reveal a primitive fear of the unknown - while one can empathize with them, fear is not an argument. They would probably say the same about anything complex they had to learn. And creating portable C programs is inherently complex.

So you can use some HLL like Python or Java and let the language creators and porters do most of the dirty work for you. This limits the range of platforms the program will run on and the range of programs you can write - well, you *can* write a network monitoring utility in Java, but then you can't run it on your i486 router. But of course it allows build systems with far more built-in assumptions, IOW much simpler ones.

Most of the other proposed solutions are no solutions at all. Trying to solve portability by seriously limiting the number of platforms is silly.

Binary packages work only in a very uniform environment (in fact, they don't completely work even on MS-Windows); people simply need to compile programs on not-so-standard systems.

Make replacements are nice, but I want to compile my program on IRIX, SunOS, and who-knows-what. I don't have any make replacements there.

The same holds for configure systems written in Python, or Perl, or whatever. I love Python and get along well with Perl, but too often I write a Python script and then find it doesn't run on Debian Woody because its Python is too old (not to speak of other Unices). The same goes for Perl. So where's the portability?

So after all, a portable build system has to use what is available -- make, shell, sed, and such simple stuff. It doesn't have to look exactly like the GNU one, but it *will* be complex and it *will* use some kind of expansion of higher-level constructs to the simple ones available everywhere.

I'm one of the people who learned the GNU build system and I think it's great. And yes, it is because I do understand it. Except for the badly written ones, all programs using it build essentially the same way, have the same means of enabling and disabling optional stuff, Makefiles have the same targets, etc.

Of course, there are issues. So go ahead and fix them.

22 Jun 2003 20:43 caffineehacker

Before Posting Learn
Those of you looking for a good tutorial or information about making a Makefile and configure script using the auto* tools should go here: www2.dystance.net:8080...

It is a great tutorial and will teach you all you need to know to understand how great a system these auto* tools really are. Don't be scared by the length either; the first section really gives you all you need to know for a simple program, and then you can skim and add the pieces you need. It's not hard, so read before posting.
~Tim~

22 Jun 2003 20:48 caffineehacker

Re: problem exists, regardless of whom is to blame
Oops, I apologize for that, I meant CFLAGS/CXXFLAGS. Just set it to -I/usr/local/include and you're good.
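i.e., roughly (LDFLAGS added here for the linker side; adjust paths to taste):

CFLAGS=-I/usr/local/include CXXFLAGS=-I/usr/local/include \
LDFLAGS=-L/usr/local/lib ./configure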

22 Jun 2003 20:53 caffineehacker

Re: Author obviously misunderstands
Ahhh, your post is making everyone green; I'm fixing it. Remember to end quotes.

22 Jun 2003 22:21 e8johan

Re: No one's mentioned qmake?

> Granted, it doesn't try and solve the
> cross platform
> issue - that's palmed off to the qt
> library. Maybe
> that's the problem with auto* - it tries
> to solve too
> many problems at once.

I'd say that it does. It even allows one to treat different platforms differently in one project file.

The solution with system description files that qmake uses makes it even greater. If you change something, just hack that one file and all qmake-dependent projects will work!

22 Jun 2003 22:23 merlin262

Re: You're dead on

>
> % You must have automake 1.6 or 1.7
> % installed to compile The GIMP.
> % blablaba...
>
>
> This is FUD. No one has to have autoconf
> or automake installed to *compile*
> things. Either you don't know a s...
> about the auto* tools or you are a
> troll. If you see message like this,
> then someone made a terrible mistake
> (maybe you) and it worths a bugerport
> (if it wasn't you who made the mistake).

You assume that the author of the software has a properly written configure script. If you've built a lot of software from source, you'll find that several software packages DO require you to have the auto tools installed. There are a variety of reasons for this, some obvious and some not so obvious. (Several packages do not ship with a configure script at all, but instead a "bootstrap" script to generate it.) There are also packages that are distributed that include links to the auto* tools instead of having local copies. These might be problems with packages, but they DO happen, and they happen often enough to illustrate that many developers lack an understanding of how the autotools (should) work.

To demonstrate, run a google search for "compile error old autoconf" and see what turns up.

22 Jun 2003 22:24 merlin262

Re: Improvements
Hrm. What's with this all green stuff???

22 Jun 2003 23:27 philhoward

Re: Just get to the point

I'm one of those who knows sendmail from even before it used m4. For too many years, m4 was screwing up everything I was trying to do with sendmail, so I ended up having to code the awful sendmail.cf file by hand. But at least it worked as I intended. Maybe m4 is powerful, but it wasn't powerful enough for sendmail, or to overcome the weaknesses in the core macros that came with it.

My personal opinion is that m4 is an abomination contrary to nature.

A few years ago I started setting up a project out of a collection of functions I had been gathering. I spent 3 weeks, with online help, trying to get autoconf, automake, and libtool to work together and make a good Makefile. All the documentation seemed to spend its time covering issues I didn't have, and didn't cover (or hid any coverage of) the issues I did have. Some people tried to help online, but none were successful. Of course I can always just blame it on my project being too complex for it to handle. But I did get the feeling that m4 was big part of the problem.

I ended up coding my own configure script directly, and have rewritten it twice since then. It's fairly stable now, but I envision one more rewrite where most of it will come to be in C rather than the current bash. The only catch with doing that is I can't use all the nice functions the library it will be used to compile will provide, unless I embed an extra copy of them. You can be sure I didn't use m4 in any of this.

You can see it here on Freshmeat, called "libh".

22 Jun 2003 23:34 philhoward

Re: Improvements

I think the green stuff represents the infestation of autoconf growing over everything in its path.

22 Jun 2003 23:45 philhoward

Sometimes a local autoconf really is needed

Sometimes a local autoconf really is needed. One example of that is when merging in a patch from a different author to add a new feature. Such patches can't just include a new configure script, since that would conflict with another patch from another author. What you have to do is apply all the patches, which may even change configure.in, and if they do so, you have to re-run autoconf yourself. The problem here is you have a very narrow window of version compatibility that might be made even worse depending on the patches actually used. Given the number of different autoconf versions around, this isn't trivial. I've also found that when upgrading autoconf to accommodate one package, it broke yet another package. It just isn't as portable as we have been led to believe.
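In practice that means something like this (the patch names are made up; which autoconf/automake version actually works depends entirely on what configure.in was written for):

patch -p1 < feature-a.diff
patch -p1 < feature-b.diff        # both patches touch configure.in
aclocal && autoconf               # regenerate configure with *your* local autoconf
./configure && make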

22 Jun 2003 23:51 philhoward

Re: Binaries & Why Makefiles in Python are bad...

Without compiling, how can you combine patches that add features coming from a couple of different authors?

23 Jun 2003 03:01 fremar

Re: Newbie Question
Also, I love being able to use a version other than the default for testing and development: testing Gnome 2.2 in another directory and linking programs against it, without breaking the Gnome 1.4 that came with Red Hat 7.3, until I'm satisfied with the new version and decide to switch over to it.
'configure' can do that. No need to even touch the Makefile.in or whatever, just specify a couple of prefix and other options. A central registry of installed libraries will most definitely not be able to do such a thing without just as much complexity as the auto* tools.
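Roughly like this (a sketch; the paths are just examples, and PKG_CONFIG_PATH is what matters for pkg-config-based packages like Gnome 2):

# build the test copy of the library under its own prefix
./configure --prefix=$HOME/gnome-2.2 && make && make install
# later, build a program against the test copy instead of the system one
PKG_CONFIG_PATH=$HOME/gnome-2.2/lib/pkgconfig \
    ./configure --prefix=$HOME/gnome-2.2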

On the other hand, I agree with the author that badly written and broken auto* configurations can be terrible. But so are broken makefiles, broken headers or wrong assumptions found in the source of many packages.

23 Jun 2003 04:29 PhrozenSmoke

Re: A coincidence?

> Most ranting comes from people who don't
> understand the auto* tools (proving that
> by writing things like you have to have
> automake installed to compile things or
> you have to know m4 to create autoconf
> macros) and who don't remember how
> software distribution looked like before
> autoconf and automake. Their posts
> reveal a primitive fear of unknown --
> while one can emphatize with them, fear
> is not an argument. They would probably
> say the same anything complex they would
> have to learn. And creating portable C
> programs is inherently complex.
>
> So you can use some HLL like Python or
> Java and let the language creators and
> porters do most of the dirty work for
> you. This limits the range of platforms
> the program will run on and the range of
> programs you can write -- well, you
> *can* write a network monitoring utility
> in Java, but then you can't run it on
> your i486 router. But of course it
> allows build systems with much more
> built-in assumptions, IOW much
> simplier.
>
> Most of the other proposed solutions are
> no solutions at all. Trying to solve
> portability by seriously limiting the
> number of platforms is silly.
>
> Binary packages work only in a very
> uniform environment (in fact, it doesn't
> completely work even on MS-Windows),
> people simply need to compile programs
> on not-so-standard systems.
>
> Make replacements are nice, but I want
> to compile my program on IRIX, SunOS,
> and who-knows-what. I don't have any
> make replacements there.
>
> The same holds for configure systems
> written in Python, or Perl, or whatever.
> I love Python and get along well with
> Perl, but too often I write a Python
> script and then find it doesn't run on
> Debian Woody, because of too old Python
> (not speaking about other Unices). The
> same for Perl. So where's the
> portability?
>
> So after all, a portable build system
> has to use what is available -- make,
> shell, sed, and such simple stuff. It
> doesn't have to look exactly like the
> GNU one, but it *will* be complex and it
> *will* use some kind of expansion of
> higher-level constructs to the simple
> ones available everywhere.
>
> I'm one of the of people who learned the
> GNU build system and I think it's great.
> And yes, it is because I do understand
> it. Except for the badly written ones,
> all programs using it build essentially
> the same way, have the same means of
> enabling and disabling optional stuff,
> Makefiles have the same targets, etc.
>
> Of course, there are issues. So go ahead
> and fix them.
>

I disagree with this guy completely, and agree with the author COMPLETELY. I write code in a variety of languages: Python, Java, C/C++, and the whole autoconf/Makefile process has become antiquated and barbaric when compared to newer installation methods available. The author says the 'make' process just uses what's available on the system: WRONG. The problem is that when every single version of the 10,000+ dependencies for a program doesn't line up to a 'T', the 'make' process just whines and quits. The problem with the make process is all of the dependencies. You can easily spend more time installing packages for the 'make' process than installing packages for the PROGRAM you are trying to compile: first upgrade autoconf, automake, m4, libtool; some programs require assembly compilers like nasm - so upgrade that. When you spend more time 'getting ready to install' than installing and running the program, there is a problem.

Who are these people trying to run network monitoring tools on i486 'routers'? Who is using an i486 as a router anyway? AOL Keyword: UPGRADE. It sounds ridiculous to make an installation process unnecessarily complicated because some loser somewhere 'MIGHT' be trying to compile the package on some Commodore-64 somewhere in the backwoods of the Tundra. Chances are, the package may compile on such a system, but will not RUN well on such a system anyway. So, we can't sacrifice progress in the name of ridiculous amounts of backwards compatibility.

I am in favor of not only moving away from the 'make' process, but moving away from raw C/C++ programs altogether. It is much better for programs to be written in programming languages that use 'wrapped' versions of the C libraries, programming languages such as Python, Perl, etc., because it ends up (in most cases) limiting the number of dependencies. It is much easier to upgrade Python or Perl ONCE than to have to upgrade autoconf-automake-m4-libtool-etc for every single package you want to compile. Also, 'wrapped' programming languages like Python introduce something that is really missing from C: ERROR CATCHING. Which is why you see so many C/C++ programs seg-faulting. It is stupid to use an 'unwrapped' version of a programming language on the assumption that ANYBODY is capable of writing a 'bug free' program. There is no such thing as a 'bug free' program, as you can only test for the PRESENCE of bugs, not their absence. So, programming languages with error catching greatly add to the stability of the program. (I have yet to see a Python or Perl program 'seg fault' unless the program was making calls to some third-party wrapped C/C++ library made with swig or the like.)

A Python program's dependencies tend to be simple (needs version 2.2 of Python, or version 2.3, etc.). Aside from getting stuff like PyGtk or special GUI toolkits, Python programs don't whine as much about dependencies because Python and Perl include all the necessary versions of the basic libraries in their toolkits. So, with Python you don't get whining like "need to upgrade libxml", because if a program will run with a certain version of Python it will run with the XML libraries IN that version of Python. So, I would rather spend 40 minutes upgrading Python ONCE every year or so than have to upgrade my whole 'make' configuration every 2 weeks. Also, the 'make' process is not user-friendly for new *nix users at all.
And, personally, I think if we ever want to see more people choosing *nix platforms over Windows, we have to show them something other than a bunch of tech-geek junk scrolling up a console. While people will say "well, newbies shouldn't compile source packages": that is stupid, because we ALL know there are often only 1 or 2 programs out there that do a special task you need done, but, oftentimes, no distros have binary versions available, so the user has no choice but to grab the source packages and try to compile them. (Though, if it was written in Python, Perl, Java, etc. it could be run immediately and not have to be compiled.) The same people that are saying "if you whine about the GNU make system, you have a fear of learning" have a fear of CHANGE. The time has come to move to higher-level fourth-generation programming languages and more civilized installation processes, instead of trying to 'live in the past' and hang onto programming languages that have served their purpose.

There is nothing wrong with writing a program that relies on 'wrapped' versions of C in a language like Python/Perl, etc., because your Python program inherits the stability of the Python base. I've noticed that a lot of people in the open source community love having FREE software available, but have a FEAR of using someone else's knowledge. Instead, they try to reinvent the wheel, writing C programs from scratch that 'seg fault' for all the same reasons 40,000+ C programs have seg-faulted. That's stupid when you can use Perl or Python with 'wrapped' C code that has protection against millions of known bugs, glitches, and common mistakes (and, yes, I admit I'm human and my intelligence is not flawless) and present the user with an application that is overall more stable than a raw C application. In Python, my errors are CAUGHT, not "core dumped" with an unexpected QUIT, scaring the hell out of any newbie (and, yes, I do make ERRORS - I am human and admit it. Do all of you?). Critical errors in raw C programs leave you with exactly one option when the program is running: QUIT. (Even the nice seg-fault catchers in Gnome and Kde lead to the program having to QUIT.) That's really stupid: sorry. If you use Python, Perl, or Java code that 'wraps' the C library, your program can continue anyway - working around the problem however you code it to do so. The result: my Python programs get more positive feedback than 'bug reports'.

The whole point of writing programs is to create something useful, not to keep 'gdb' busy debugging core dumps. When you own up to being human, you will understand the value of using higher-level languages that 'wrap' the C code, instead of writing raw C programs. It's not the quantity of programs you write, it's the QUALITY. Truth be told, a lot of open source programs written in C seem great because they are free, but the 'seg faults' and 'core dumps' caused by uncaught errors (and people trying to re-invent the wheel by coding C programs from scratch rather than building on a stable code base) make a lot of open source programs not worth the tarball they are packaged in. Sometimes you have to put the 'do it myself from scratch' pride thing aside and start development from a stable code base - even if it is somebody else's - like the Python, Perl, or Java code base, etc. No, you won't have the 'glory' of saying "I coded all of this in C... from scratch", but the program will actually WORK with way fewer bugs.
By the way, BUGS also limit the portability of programs, and C/C++ programs using the 'make' process have PLENTY of them for everybody.

23 Jun 2003 07:03 pfremy

Comments and alternatives
- the author is gentle: he forgot to mention the issue with configure: imagine you type
./configure --with-qt-lib=/opt/my_qt

Configure runs and stops because it can't find Qt. Indeed, the right option is --with-qt-libs=/opt/my_qt (notice the s after lib). But why did this stupid program not check its arguments?

Even worse, if there is a standard Qt, configure will use it and I won't notice that it is not using my special version of Qt because of a trivial spelling error.

- other issue: the user is a clever user and wants to separate his build tree from the sources, for example to have multiple builds with multiple options (this guy is a masochist). So he creates a directory my_build, cds into it and then runs ../configure. 50% of the free software out there will fail at this, although configure has been designed to handle this case.

- tmake from Trolltech attacks the problem from the other end: it generates Makefiles based on a set of platform-specific templates. What I like about it is that it is very simple to use, and can generate msvc (yes, I need that) project files. It is not extraordinarily good but sufficient for my needs. It misses a few dependencies. I really regret that Trolltech has switched to a cpp-based version in Qt3; the perl version was easier to hack.

- boost.org uses its own make, bjam, which looks to me even more complicated than autoconf/automake. Like tmake, it is based on a set of platform-specific templates.

- Java has its ant thing.

- autoconf/automake is even more fun with projects like KDE, where every possible hack has been used to generate extra-stuff and to patch the generated makefiles. Only two people in KDE really understand the complete makefile generation process. There is a python script in KDE, unsermake, that should replace part of automake (or is it autoconf ?)

Anyway, the author is 100% right. Automake/autoconf is a horror that has only one quality going for it: it does not work too badly.

23 Jun 2003 07:30 asuffield

Re: problem exists, regardless of whom is to blame

> Oops, I appologize for that, I meant
> C_FLAGS/CXX_FLAGS. Just set it to
> -I/usr/local/include and you're good.

I recommend using C_INCLUDE_PATH and CPLUS_INCLUDE_PATH instead, they're more likely to work in all cases.
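
For example (Bourne-shell syntax; /usr/local/include is just the path from the earlier post -- substitute wherever the headers actually live):

    export C_INCLUDE_PATH=/usr/local/include
    export CPLUS_INCLUDE_PATH=/usr/local/include
    ./configure && make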

23 Jun 2003 07:30 asuffield

Re: A coindience?
I considered replying to this. Then I decided that the formatting and overuse of capitals spoke for themselves.

23 Jun 2003 07:46 waschk

Re: You're dead on

> checking for automake >= 1.6 ...
> You must have automake 1.6 or 1.7
> installed to compile The GIMP.
> Get
> ftp.gnu.org/pub/gnu/au...
> (or a newer version if it is
> available)
> checking for glib-gettextize >= 2.0.0
> ... yes (version 2.0.1)
> checking for intltool >= 0.17 ... yes
> (version 0.17)
>
> Please install/upgrade the missing tools
> and call me again.
>
> Of course, if you upgrade you break
> something else.

This isn't really a problem. On modern distributions like Mandrake or Debian you can install more than one automake and autoconf version. Then you can call the required version in your makefiles or autogen.sh script, e.g. automake-1.4 or automake-1.7.
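
A minimal autogen.sh along those lines might look something like this (the exact program names vary between distributions; aclocal-1.7/automake-1.7 are only examples):

    #!/bin/sh
    # call the specific tool versions this package was developed with
    aclocal-1.7 \
      && automake-1.7 --add-missing --copy \
      && autoconf \
      && ./configure "$@"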

23 Jun 2003 07:53 nedric

Re: Binaries & Why Makefiles in Python are bad...

> A regular user should never be required to
> compile the
> software. It's only practical for small
> apps anyway. Who
> wants to compile KDE, or Mozilla, or
> OpenOffice?

The answer: every single gentoo (and slackware) user does. It takes time, but a huge portion of users are willing to do this for a from-scratch system. Don't insult users by suggesting that they keep their hands clean. This very fact is a large reason for improving/replacing the auto* system.

23 Jun 2003 07:59 asuffield

Re: Improvements

Freshmeat sucks, do not leave quotes open. The preview is there for a reason.

> % Traversing a huge DAG for every
> command
> % is clearly a loss. Entering a
> directory
> % and parsing a small makefile is very
> % likely to take less time than this.
> %
>
> That's very relative, it depends on the
> proportions

Congratulations, you have managed to reiterate my point.

> I have been doing some
> benchmarks for a while. For example, on
> a tree with about 150 subdirectories (in
> fact, it's a modified version of wine),
> with the standard make file system (a
> recursive one, not automake based), it
> takes on my machine about 11.5 seconds,
> not accounting disk access times, to
> traverse all the directories and
> considering all targets). The timings
> are the average of 6 consecutive runs,
> disregarding the very first (So that
> disk access doesn't account, since I
> have more than enough memory for
> caching). With a modified build system
> I'm still working on, this time is down
> to about 2.3 seconds, in average. With
> disk access accounted, it's about 19.0
> and 10.2 seconds, respectively. It's a
> substantial difference only for a simple
> task, if you consider it in percentage.

That's a lot of numbers and no description of what you changed. As such they're just numbers.

> % Nix. I've never had trouble doing
> this
> % for complex trees, with or without
> % automake. You just have to arrange
> your
> % tree in a manner that reflects the
> % dependencies. This is not a
> particularly
> % limiting constraint; it's actually
> % something you should be doing anyway,
> to
> % make the code easier to navigate.
> There
> % should not be any circular
> dependencies
> % in your source tree, inner
> directories
> % should not depend on things at a
> higher
> % level, and so on.
>
> Well, even in some questionable build
> systems, circular dependencies are
> considered a bad strategy and eventually
> a conceptual error, so you're talking
> nothing practical here.
> And about the depth level, you don't
> need to dig on that to get things far
> from optimized.

Parse error. What are you talking about?

> The target dependency
> (as seem by make itself) is at least
> incomplete, as you can see on the
> paper's example on page four.

The example given has been deliberately broken by the author of the paper. The simple fix is to change the top-level makefile (on page 3) so that MODULES is defined to be "bee ant" instead of "ant bee", whereupon everything works as expected.

This is a good example of the sort of confusing and misleading things that can be found in that paper. He starts with a contrived problem, then jumps to monolithic makefiles as a solution without considering any others.

> % Which has very little to do with it
> % being monolithic, and a great deal to
> do
> % with it being better designed. This is
> a
> % real life example of the sort of
> % confusion to be found in that
> article.
> %
>
>
> Perhaps you should do some empirical
> checking before considering theory? At
> first, linux 2.5 build system is not
> monolithic, in the sense that it's not
> composed by a single makefile, it still
> has a main makefile which then consider
> the subdirs ones. It does that before
> starting making the targets, and not a
> dir at a time, which allows make to
> constructing an entire view of the build
> tree at once.

You mean it's monolithic. The fact that it is split up into multiple files in different directories has nothing to do with this; it loads them all into one giant DAG.

> When it needs to rebuild a
> target, it just does that, instead of
> being blind and checking dir by dir when
> there's no target to rebuild on most of
> them, and parsing and parsing makefiles
> for then doing just that.

And instead of parsing a group of smaller makefiles, it traverses the entire DAG for every operation.

> The main gain is not for single time
> builds, but for developers, which need
> to rebuild the entire kernel with sparse
> changes here and there.

And the main loss is also for developers, who often want to rebuild components of the tree without traversing the entire graph.

> % If you take a system, and change A
> and
> % B, and it works better afterwards,
> that
> % does not mean that changing A made it
> % work better.
> %
>
>
> I never said it did.

Your logic was "Linux 2.5 is monolithic, and it is faster, therefore monolithic makefiles are faster". This is wrong because other things were changed at the same time.

> It has improvement
> on other areas too, but it's just
> complimentary on what concerns the focus
> of the discution.

And this is just plain wrong. It's not "just complimentary", whatever that is supposed to mean, it's a biased experiment. That means any results it gives tell us nothing about the differences between recursive and monolithic makefiles.

> I have benchmarked
> against previous 2.5 versions, before of
> the build system change

You mean you've performed the biased experiment.

> but I believe
> timing gain from rebuilding is not the
> main point here, it make as is
> simplified dependency checking and
> parallelism (When i have to build it on
> a 4-way system I take that into account)
> in way that is both simple and better.
>
> But let's consider it all as it if it
> were only experimental theory without
> even a single testing or probability of
> success, experimenting is good and in
> most cases the way to people to find out
> if this is worth something or not,
> right? If you don't want to try, you
> don't need to, it's a matter of personal
> taste. There are people out there still
> using Imakefiles, and I won't try to
> change their minds, even why, if
> Imakefiles suffices, with i'ts good and
> worses, they just probably won't feel
> tempted to change anyway. But starting
> from this point to try to say "anything
> else than Imakefile is just confuse" is
> kind of anti-scientific. If someone want
> to try to prove to others something new
> is something worse or better, it should
> do that with facts and data, not
> personal taste. I believe to have such
> data only enough to convince myself, I'm
> just pointing alternatives and relating
> my experiments with build systems, which
> come from some years. I still use my
> recursive make file based build system
> on some old projects, for the fact that
> I won't change them if simple enough to
> hold fine with the old approach; For
> many years I have used autoconf/automake
> too, in some cases for good, but not
> always. My current experiments have good
> results, but if something else appears,
> I would be glad to test.

This was incomprehensible, but appears to be handwaving combined with an appeal to emotion.

> And what concernes automake's usage, it
> could be made in a way that would be
> quite transparent to the project
> maintainer and developers, so it's no
> matter of worry for those who prefer the
> older method just because of that.

It could, but there is little evidence that it should.

23 Jun 2003 08:03 asuffield

Re: Binaries & Why Makefiles in Python are bad...

>
> % A regular user should never be
> required to
> % compile the
> % software. It's only practical for
> small
> % apps anyway. Who
> % wants to compile KDE, or Mozilla, or
> % OpenOffice?
>
>
> The answer: every single gentoo (and
> slackware) user does. It takes time,
> but a huge portion of users are willing
> to do this for a from-scratch system.
> Don't insult users by suggesting that
> they keep their hands clean.

I'll insult them by calling them idiots instead.

23 Jun 2003 08:07 asuffield

Re: What about ant?

>
> % ant.apache.org/
> %
> % I think a new auto{...} is need but
> it
> % should be developed using something
> like
> % RFCs, shoud be lightweight (ant
> isn't).
>
>
> You know you hit it on the head. Part
> of the reason why I have given up on C
> and C++ is because the build process.
> There the Java people did it right!

Building java is *easy* because there is only one java platform being commonly used. This lack of diversity is a serious problem, because this java platform happens to suck.

23 Jun 2003 09:21 stevenknight

Re: A coindience?

> The same holds for configure systems
> written in Python, or Perl, or whatever.
> I love Python and get along well with
> Perl, but too often I write a Python
> script and then find it doesn't run on
> Debian Woody, because of too old Python
> (not speaking about other Unices). The
> same for Perl. So where's the
> portability?

We implement SCons internals in Python 1.5.2, precisely to minimize this sort of portability issue.

The issue of SCons not being implemented in a more "modern" version of Python comes up periodically, but the advantages of delivering something that works on just about every installed Python out there outweigh any small benefit we'd get from being able to use Python 2.x language features in the internal implementation. So we're probably going to stick with the lowest-common-denominator 1.5.2 release for quite a while. (You can still use Python 2.x in your SCons configuration scripts if you wish, at the expense of your own build's portability.)

Similarly, we'll be adding (soon, I hope) a pre-built package with a pre-built executable that will not require separate installation of Python to use successfully.

This is all part of our philosophy of trying to deliver a build tool that works as well as possible for as many platforms and people as possible out of the box. It's not impossible to do this just because you use a scripting language like Python, it just takes a little more work on the part of those of us implementing the build tool.

23 Jun 2003 09:46 stevenknight

SCons stability, autoconf functionality
Two such solutions are "SCons" and "Cons". Both try to replace Make with something far more flexible. However, they both depend on tools which are a bit less standard than sh and m4: Perl and Python. Still, both languages are quickly becoming standard on the major Unixes. SCons is a bit less mature, but is preferred by the author of this paper. Using either allows a much clearer build process for complicated software. There is no requirement to generate a set of build instructions from a template.

A few quick items of note about SCons:

SCons is actually a lot more stable than its alpha status would suggest. We've still been tweaking end-cases in the user interface, but the base functionality has been in production use for a lot of projects for the last year and a half. And we have a very large suite of regression tests that have been implemented incrementally with the code, so functionality tends to stay fixed once a bug has been identified. Plus, we're going to go to beta very soon, anyway.

Also, the functionality in SCons at this point is almost a superset of that in Cons classic, modulo a few smaller features, the absence of which doesn't seem to get in too many users' way.

Last, the current version of SCons added integrated support for autoconf-like functionality, starting with the ability to search for header files and libraries. It also provides a framework for adding your own tests, so you're not restricted only to what's available in the tool itself. The next version will add the ability to check for specific functions and types, and may add config.h-like header file generation, too.

23 Jun 2003 11:06 kil3r

Re: A coindience?

> Most ranting comes from people who don't
> understand the auto* tools (proving that
> by writing things like you have to have
> automake installed to compile things or
> you have to know m4 to create autoconf
> macros) and who don't remember how
> software distribution looked like before
> autoconf and automake. Their posts
> reveal a primitive fear of unknown --
> while one can emphatize with them, fear
> is not an argument. They would probably
> say the same anything complex they would
> have to learn. And creating portable C
> programs is inherently complex.
>
> So you can use some HLL like Python or
> Java and let the language creators and
> porters do most of the dirty work for
> you. This limits the range of platforms
> the program will run on and the range of
> programs you can write -- well, you
> *can* write a network monitoring utility
> in Java, but then you can't run it on
> your i486 router. But of course it
> allows build systems with much more
> built-in assumptions, IOW much
> simplier.
>
> Most of the other proposed solutions are
> no solutions at all. Trying to solve
> portability by seriously limiting the
> number of platforms is silly.
>
> Binary packages work only in a very
> uniform environment (in fact, it doesn't
> completely work even on MS-Windows),
> people simply need to compile programs
> on not-so-standard systems.
>
> Make replacements are nice, but I want
> to compile my program on IRIX, SunOS,
> and who-knows-what. I don't have any
> make replacements there.
>
> The same holds for configure systems
> written in Python, or Perl, or whatever.
> I love Python and get along well with
> Perl, but too often I write a Python
> script and then find it doesn't run on
> Debian Woody, because of too old Python
> (not speaking about other Unices). The
> same for Perl. So where's the
> portability?
>
> So after all, a portable build system
> has to use what is available -- make,
> shell, sed, and such simple stuff. It
> doesn't have to look exactly like the
> GNU one, but it *will* be complex and it
> *will* use some kind of expansion of
> higher-level constructs to the simple
> ones available everywhere.
>
> I'm one of the of people who learned the
> GNU build system and I think it's great.
> And yes, it is because I do understand
> it. Except for the badly written ones,
> all programs using it build essentially
> the same way, have the same means of
> enabling and disabling optional stuff,
> Makefiles have the same targets, etc.
>
> Of course, there are issues. So go ahead
> and fix them.
>

I absolutely agree with you!
autoconf/automake is really the RIGHT way to do that thing.
IMHO it's insane that I'd have to install python (which I personally hate) and perl (which I like) or java (...) to build a software package.
I know that for all those RH folks Python or Perl is something as usual as bash (which I replace with tcsh BTW ;), but for most platforms or environments it's absolutely undesirable to have them.
Autoconf is designed with portability in mind.
If the author of the above article read the autoconf documentation, he'd probably find an answer to why configure cannot check parameters at startup.

23 Jun 2003 11:17 kil3r

Re: Dont mind it as a user, Hate it as a developer

> Overall, as a user, the autoconf system
> rocks. As a developer, it sucks, and it
> needs to be made more intuitive.
>
> And if this comment turns out longer
> than I expected, It's cos I'm seriously
> bored and honestly have nothing better
> to do right now.

You're absolutely right!
Just because autoconf is not trivial for software developers doesn't mean it's bad.
It's great for me (as a developer and user) anyway ;)

23 Jun 2003 11:56 stevenknight

Re: Improvements

>
> % And for Automake, It should impose a
> % turn to a non directory-recursive
> % approach. There is some work being
> done
> % on that area, but seems to me that
> it's
> % more like a 'we support that as an
> % alternative' idea. That would result
> in
> % a several times faster builder, but
> % better yet, a consistent one, read:
> %
> %
> www.pcug.org.au/~mille...
> %
> % For some problems using recursive
> make
> % file systems.
>
> I have found this article to be
> confusing and misleading. It discusses
> several common problems with makefiles,
> most of which are not related to using
> recursive makefiles. Putting everything
> into one giant makefile means that make
> has to parse and process a huge DAG for
> every operation; this can make things
> slower, not faster. I have never seen
> convincing evidence that monolithic
> makefiles are inherantly "better",
> although you may be able to construct
> *some* cases where they run faster
> (trivial examples are likely to do
> this).

Real-world testimonial about the benefits of a global DAG:

Having a global view of the dependencies is really cool. Compare the couple of seconds of waiting with the need to "make clean; make" or "make depend; make." I wasted so much time before because I forgot about certain dependencies, resulting in inconsistent builds and strange bugs.

(This is from the project leader for the Computational Crystallography Toolbox, and he happened to mention it while talking about SCons, but it's really generic to any single-DAG build tool.)

The point is: yes, a global DAG may make any individual build take a little longer, which is what's sticking in your craw. Point taken. The benefit, though, comes from all of the problems you avoid by letting the build system take care of these things for you. And it's taking care of them using CPU cycles and disk accesses that are going to get faster with new generations of systems, so this part of your build will only get faster in the future.

What doesn't scale in the future, though, is the time you'll have to spend tracking down an introduced dependency problem and then reconfiguring your build to deal with it. Your counter-example to Miller's paper--just reorder the "bee" and "ant" projects--is itself contrived, because Miller is talking about the general problem of representing dependencies using order (or multiple passes). If order in your build system is necessary to represent all of the dependencies, then your build is fragile and will break as soon as someone checks in a change that breaks the order.

You could say, "Don't break the order," but it's just not necessarily that simple. Not every developer is sufficiently careful, and in a very large, complicated software system, it's likely that no one understands all of the order dependencies. So you end up with huge strings of -l options, or multiple passes through different targets, etc., that no one dares touch because no one really understands what will or won't break the build.

Now you could let the computer do the work of optimizing the build for you (a novel idea!), but if it's going to do that, it needs an accurate representation of all of the dependencies--a global DAG. As you point out, it doesn't come free, but for lots of projects, especially large ones, the benefits far outweigh the cost.

Don't get me wrong--Miller's paper isn't flawless. For example, the Makefile-macro technique he uses for knitting together subsidiary Makefiles into a global DAG works fine (I've implemented it on a project), but it's not very extensible. You end up creating a lot of special-purpose macro conventions that have to get rewritten as your build assumptions change, new variants get added, etc.

But "Recursive Make Considered Harmful" is a seminal piece of work. Miller went back to first principals to investigate why increases in CPU and disk speeds over the last 20 years or so haven't sped up builds all that much, and identified a number of contributing factors that most everyone had been overlooking, including (but not limited to) incomplete dependency graphs. In my book, "RMCH" is required reading for anyone who needs to understand why building software correctly can be such a thorny problem.

23 Jun 2003 12:08 kil3r

Re: Yes, car crashes are the car's fault

> A configurator that requires a developer
> not to be lazy is deficient. It is a
> worthy goal to fix the configurator so
> that even a lazy developer can use it.

Lazy developers should never start writing the software.

> Maybe developers are setting up Autoconf
> wrong because it is poorly documented,
> or unnatural to use, or hard to learn.
> There's plenty of blame to go around.

Maybe the drivers are poor because the car is poorly documented, or unnatural to use, or hard to learn? Or (more likely) they didn't put enough effort into learning the traffic regulations or didn't pay enough attention during the driving course?

23 Jun 2003 12:15 kil3r

Re: Yes, car crashes are the car's fault

> I'm a realist. When I see bad C code
> it's commonly intermingled with a shoddy
> ./configure script; I immediately think
> the developer is not experienced.
> Likewise, whenever I see a clean (i.e.,
> like 30 lines) Makefile.am and
> configure.in, there is elegant code
> alongside it. This is no coincidence.

Golden words, I must say!
When I was preparing my first software for public release I took it as a point of honour to learn autoconf and make the thing _the_right_way_[tm]! :)

I thought then: "If I could learn C, I can learn autoconf". :)

23 Jun 2003 12:21 kil3r

Re: Author obviously misunderstands

>
> % He again does the typical thing and
> runs "./configure
> % --prefix=/opt". Not to say the author
> has no experience
> % with Linux, but this is not typical;
> in fact, it's wrong.
> % There is a standard for installation
> prefixes.
>
>
> Nonsense. It's not wrong. Atypical
> maybe but not wrong. A person can
> administrate their system however they
> please. If the sysadmin wants to
> compile software and install it into
> "/mystuff" then he's free to do so.

And if he runs into trouble then that was his choice.

If the administrator decides to remove the /var and change it to /myvar that is absolutely OK for me also. Go ahead!

23 Jun 2003 12:36 kil3r

Re: There is no need for auto*

> Just write portable programs. Portable
> program does _not_ need any auto* stuff
> - by definition :)
>
> I personally have no idea and I don't
> understand why do we need to support all
> this crap (ancient OS etc) which is in
> use only by 0.01% of all developers, but
> which is included in every auto* -
> "just in case".
>

Do you know why all public or government buildings have handicap support???
Because of CORRECTNESS!!!

If I can help someone else to compile my software on AIX or HP-UX or IRIX (which are not so rare anyway!) I'll do it without any question.
That's why I make GNU (or, generally speaking, open source) software, so anyone can use it!

23 Jun 2003 12:42 kil3r

Re: There is no need for auto*

>
> %If I want to install your package in
> /usr/local instead of /usr, or maybe in
> $HOME/local, what do I have to do to
> make it do so?
>
>
> make prefix=/blabla/ install
>
> And, I don't like programs thar are tied
> to their location upon compilation -
> this shouldn't be. Better to use
> relative paths - relative to program
> location or so.
>

And you run into security problems very quickly if you use only relative paths.

26 Jun 2003 05:04 Tadu

Re: MPlayer core team likes to swear endlessly

> Autoconf is a good idea, but a very broken implementation.

It is a good implementation for what it needs to do.

> Automake is just useless, gnu make is powerful enough to write short, efficient rules in it. No need to generate 50k+ files.

It makes boring tasks easier and helps in setting up the right dependencies. A Makefile.am typically is just a few lines.

> libtool ... ehh. it should not been created. it IS the real nightmare, when comes to compatibility/versions.

You're right that it's not good enough, though it's getting there. You simply need it when you have shared libraries, despite its problems.

> I vote for hand-written ./configure scripts, it even works for such large projects like MPlayer!

It doesn't work. You just close your eyes to the problems. Edit a file in a subdirectory, type make, and it won't compile your changes. Cross-compilation doesn't work at all, standard autoconf options don't work, not everything is installed by default. You're re-inventing the wheel, poorly.

> If you write it modular (using functions etc) then it's short

And? It's not any faster. Considering the code size of your pet project, the configure script isn't relevant at all.

> Although a good standard for configure option naming and use is really needed.

Yes. That is why it exists. Just read the autoconf documentation.

26 Jun 2003 09:05 tmh

Re: Comments and alternatives

> - the author is gentle: he forgot to
> mention the issue with configure:
> imagine you type
> ./configure --with-qt-lib=/opt/my_qt
>
> Configure runs and stops because he
> can't find Qt. Indeed, the right option
> with --with-qt-libs=/opt/my_qt (notice
> the s after lib). But why did this
> stupid program not check his arguments
> ?

Maybe you should read the documentation sometime. Unknown options are silently passed to any other configure script run by the top level configure script.

Also, there's no need to pass such options to the configure script. The correct way is to override LDADD, INCLUDES, and PATH so that the configure script can find everything it needs.

I'll agree that there currently is no fixed way of doing things. Some configure scripts allow you to specify an option, and others expect you to pass variables down.
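
Passing variables down typically looks something like this (CPPFLAGS/LDFLAGS are the more common spelling of the same idea; the paths are only an illustration):

    CPPFLAGS="-I/opt/my_qt/include" LDFLAGS="-L/opt/my_qt/lib" ./configure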

> Anyway, the author is 100% right.
> Automake/autoconf is an horror that only
> has one quality for it: it does not work
> too bad.

The author, like you, has not read up on the autotools documentation.

26 Jun 2003 09:14 tmh

Re: You're dead on

> In addition, automake/libtool/autoconf
> are a versioning nightmare. Output like
> this is not uncommon:
>
> checking for libtool >= 1.3.4 ... yes
> (version 1.4.2)
> checking for autoconf >= 2.52 ... Too
> old (found version 2.13)!
> checking for automake >= 1.6 ...
> You must have automake 1.6 or 1.7
> installed to compile The GIMP.
> Get

Nice FUD. This shouldn't happen unless your system clock is so skewed "configure" thinks it needs to rebuild itself.

26 Jun 2003 22:17 elgreen

Re: A coindience?

> Most ranting comes from people who don't
> understand the auto* tools (proving that
> by writing things like you have to have
> automake installed to compile things or
> you have to know m4 to create autoconf
> macros) and who don't remember how


Indeed. For example, the cryptic error message that
he described is something that happens with
automake, not with autoconf. I do
not use automake so I don't have that
problem with the various programs that I maintain --
my generated 'configure' scripts run pretty much
*everywhere*, even in places that have no
'autoconf' or 'automake' installed -- such as, e.g.,
the typical IRIX or AIX system.


He whangs on autoconf when the real problem is
automake. Typical of someone who doesn't
understand the tool -- he picked on the wrong
target!


(This is not to say that autoconf isn't hairy enough
as it is! Just that blaming 'autoconf' for the problems
of 'automake' is, as Spock would put it, "illogical"!).

26 Jun 2003 22:22 elgreen

Re: agreed

> coherent form, preferably using sane
> tools (who the hell uses m4?).


I used to generate my entire web site via a complex
set of m4 macros. Not my fault if you're not geek
enough to do that :-).



> It would also be nice if there was a GUI
> option so less experienced people could
> install without having to use the
> console.


There is such a tool. It is called 'krpm' or 'gnorpm',
my friend. Less experienced people should not be
messing with source code, and shouldn't even
know what a compiler *IS*, much less be invoking
it.

26 Jun 2003 22:32 elgreen

Re: Raggin' on auto*

> How often did you try to write a config
> script? I tried two times but each time
> I had big problems. Last time I tried to


Well, y'know, that's why they pay us computer
programmers the big bucks -- to figure sh** like that
out. You sound like one of those whiner web
weenies who think that slinging HTML is
"programming". It's not. Programming is work. Part of
the work is
learning the tools. 'autoconf' is a hairy tool, but it
works, and works well. 'automake'.... well, the less
said there the better, I do not use it and refuse to
use it, because my code must be portable across
multiple operating systems, including those upon
which 'autoconf' is not installed.


So, good luck with learning Linux. Maybe in a few years you'll know enough to know the difference between an 'autoconf' problem and an 'automake' problem. And BTW, the dependencies issue doesn't belong to either tool -- that's a Linux (and Unix in general) problem, where OS vendors keep changing where they sling things. Don't blame 'autoconf' for things it's not responsible for.

27 Jun 2003 00:34 pfremy

Re: Comments and alternatives
> Maybe you should read the documentation
> sometime.


Why should I ? I am just a user. The point of configure is to
make it easy to distribute programs, so that the user has
only simple steps to accomplish to get the program compiling:
./configure; make; make install


Now you are telling me that I should read the whole autoconf documentation just to compile a program that was distributed?


> Unknown options are silently
> passed to any other configure script run
> by the top level configure script.


I am sure there is a reason for that. The problem is that it makes it a pain in the ass to use. So we have a tool that is a pain in the ass to use for a developer and for a user. The only thing the auto* tools have going for them is that they work.


> Also there's no need to use such options
> to the configure script.


Then why do they provide them, can you tell me ? Why is it
so standard if it is so useless ?


> The correct way
> is to override LDADD INCLUDES and PATH
> so that the configure script can find
> everything it needs.


In this case yes, but there are other cases where you really need the option. For example, when compiling vim, you can choose between KDE, Gnome, Gtk and X GUIs. If I make a mistake in the spelling, I will end up with a Gnome GUI and no error notification. Not acceptable. I have to dig through the 20 pages of configure output to check that it has accepted my options.


> I'll agree that there currently is no
> fixed way of doing things. Some
> configure scripts allow you to specificy
> an option, and others expect you to pass
> variables down.


And none of them report an error, which is the most annoying part.


> The author like you, has not read up on
> the autotools documentation.


Personally, I won't bother. I distribute my programs with a
Makefile. Yes, it is not portable. But any developer can fix it in
20 seconds if it does not compile.


27 Jun 2003 11:35 paolino

Re: Definitely

>
> % I couldn't agree more. The
> % automake/autoconf
> % tools are a nightmare to use, poorly
> % documented,
> % and hard to maintain. As a developer
> I
> % really
> % struggled to make just a few
> % modifications to my
> % project's auto* setup (created for me
> by
> %
> % KDevelop).

Ever tried to read, say, openoffice generated HTML?
IMHO, the autotools work just right, are portable and effective, and once you understand the concept, you just use them easily... Maybe get the GNU documentation about them?

28 Jun 2003 01:58 tmh

Re: Comments and alternatives

> In this case yes, but there are other
> cases where you really
> need the option. For example, when
> compiling vim, you can
> choose with KDE, Gnome, Gtk and X gui.
> If I make a mistake
> in the spelling, I will end up with a
> Gnome GUI and no error
> notification. Not acceptable. I have to
> dig through the 20
> pages of output of configure to check
> that he has accepted
> my options.
>

Well I neither use GNOME nor KDE, so I have no idea what you had to go through. I avoid both desktop managers because they are not necessary as long as you use a reasonably lightweight window manager.

Also I don't use a graphical editor (emacs in console mode is fine in a large terminal).

But I digress.

If a package has you looking through tons of output without good documentation, then maybe you should ask questions on the mailing list and alert the maintainers/authors to the lack of clarity.

Free software is about choice, and freedom. Try to keep that in mind.

> And none of them report an error which
> the most annoying.

Actually this isn't always true. It depends on what mistake you make. Try "configure --foobar" and then try "configure --enable-foobar."

It's a design decision not to check options completely: as long as an option "looks" right, it can be passed down to other configure scripts.
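
Roughly (the exact behaviour differs between autoconf versions, but this is the usual pattern):

    ./configure --foobar          # usually rejected as an unrecognized option
    ./configure --enable-foobar   # usually accepted silently and simply ignored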

Can this be fixed? It probably can with some redesign by the autoconf people. Has anyone bothered to yet? No.

> Personally, I won't bother. I distribute
> my programs with a
> Makefile. Yes, it is not portable. But
> any developer can fix it in
> 20 seconds if it does not compile.

There's a lot more to be gained than just compile time options. The whole autotool set gives you incredibly good dependency checking, a slick and streamlined installation procedure, along with complete portability for building libraries (see libtool).

Redoing all of the autotools' work with big hand-written makefiles will hurt, for each and every project you maintain.

Maybe you should put that effort toward solving this problem in the autotools if it annoys you so much. Remember, free software is about freedom and choice.

29 Jun 2003 04:52 bonzini

Portability is always hard to achieve
I completely disagree. Lack of proficiency in using the tools is not an excuse for the problems encountered when building programs. Yes, autotools (especially autoconf) do have quite a steep learning curve, but not because they involve learning m4 or unraveling 20kb Makefiles (not 20k lines of code -- that's the configure script :-), rather because a tool that helps making programs portable still asks you to know the pitfalls of portability. The mentioned book is wonderful and taught me all that I know about Autoconf, Automake and libtool: it is not a book for users, because users *would not* need to know anything about the autotools if the developers tried to learn rather than improvise.

Autoconf helps testing for features and works like a charm, but programs still have to be coded in order to degrade gracefully when such features are absent. It is surely easier to use for the developer than trying to compile the program on a thousand architectures and figuring out what's present on each of them and what's not; and if configure scripts were written sanely, it would be surely easier to use for the user than modifying a Makefile to suit his OS.

Autoconf is wonderfully extensible, but is indeed wonderfully hard to extend *in a proper way*. So Autoconf does help with testing for packages, but it does it slightly worse, because developers still have to know whether the APIs changed in different versions of the libraries and code the appropriate tests. Anyway, it gives better error messages than a compiler or linker error at make time -- if it does not, it is a bug, because a properly autoconfiscated program should *never ever* fail to build after the configure script has run.

Finally, Autoconf has evolved taking into account some of the problems mentioned in the article. config.cache is disabled by default nowadays; too bad that many programs use a six-year-old version of Autoconf (2.13) only because the developer is lazy and does not want to type "autoupdate" (yes, this tool was very buggy in 2.50 but it has been completely rewritten since and is now foolproof). As a result, many configure scripts do not work under zsh (i.e. under MacOS X), are bound to use obsolete versions of Automake and libtool which carry their own share of bugs, and so on.

Any tool meant to replace the autotools would have exactly the same pitfalls of the latest Autoconf. Developers should learn to use the autotools properly instead of agreeing with users who bash against them.

29 Jun 2003 05:09 bonzini

Re: sort of a pain

> i concur, the auto* tools can be a pain
> in the arse sometimes. mainly when you
> are dealing with an architecture that it
> doesnt like (i had to hack every single
> "configure" script i ran under OSX 10.1
> to get stuff to build)

As I already said in another reply, that's because people use six-year-old (not three-week-old!) versions of Autoconf.

> i've recently switched to
> gentoo linux, and their portage database
> has so many programs in it, i rarely run
> configure/make scripts myself anymore --
> portage does it for me, with all the
> options.

Yes, ports are the solution if you want the flexibility of building from source code and the ease of binary distribution. But ports cannot replace autotools, they can only be layered over.

Paolo

30 Jun 2003 12:49 leshert

Re: A coindience?

> I considered replying to this. Then I
> decided that the formatting and overuse
> of capitals spoke for themselves.

Not to mention the 31337 user name and the full-quoting of the original message. As a Python advocate myself, this embarrasses me.

02 Jul 2003 09:00 amoe

Autotools are cool; new tools could be cooler

I think we should keep the autotools around purely on the grounds of their esotericism. They're probably the most widely used programs around to have been written in shell (raw, ancient Bourne, no functions), M4 (an arcane macro processor) and Make (syntactic tabs). And yet they've held up surprisingly well over the years, and enabled porting of C programs to *BSD and all sorts of proprietary *nixes. So much respect to autotools, because they are an extremely clever hack in the spirit of Unix.

However, if we are thinking about replacing them, we should not limit our scope to mere building: we should be sure to create a fully featured system for end-user installation as well, and one that is cross platform. This is a tough goal, but doable, and if we can deploy programs on Win, Mac and *nix with equal ease, it will be a boon to the community.

02 Jul 2003 09:18 amoe

Re: What about ant?

>
> % ant.apache.org/
> %
> % I think a new auto{...} is need but
> it
> % should be developed using something
> like
> % RFCs, shoud be lightweight (ant
> isn't).
>
>
> You know you hit it on the head. Part
> of the reason why I have given up on C
> and C++ is because the build process.
> There the Java people did it right!

Um. Ever tried to resolve a complicated CLASSPATH mess? Tried to make an elegant Java library without jars, only to find that meta-information required for use can only be used with jars? Ever written an Ant build.xml without screaming at your keyboard when your fingers drop off? Ever been driven mad waiting for the damn hack of a thousand shell scripts in nonstandard Micros~1-esque paths to load (Ant, that is)?

02 Jul 2003 10:34 tmh

Re: how dare you?

> I spent two - fully wasted - days
> looking for documentation, trying to
> write makefile.am, configure.in,

Have you been to the GNU website? Even if you're not a fan of texinfo you can view the on-line manuals there for all their software.

> what.ever, then trying to copy someone
> else's, and finally stuck to my old
> hand-written shell script which
> compiles. gradually I upgraded this to a
> little bigger shell script, which is
> capable of recompiling itself.
> dependancies? it's my project. it runs
> on my machines.

Every time you work on a project you'll be writing the same scripts over and over and over. Or you could do the right thing, write your configure script for autoconf, and extend it by writing your own test cases in m4.


> well, I like makefiles, I like
> configure-based projects, because of the
> advantages mentioned. but writing one is
> - at least - a pain in the ass. we
> definitely need something new, so that
> someone like me - someone programming on
> a little application - is able to make
> it public in a simple way without one
> year developers work to write an install
> script.

It's called the autotool suite. You can do something as simple as use "autoscan" and follow the automake tutorial and you'll be on your way. Took me less than a day to get my first autotool setup working.
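
For the record, the quick start being described is roughly this (file names assume a reasonably recent autoconf; older setups use configure.in):

    autoscan                          # writes configure.scan based on your sources
    mv configure.scan configure.ac    # then edit the placeholders it leaves
    # write a short Makefile.am (bin_PROGRAMS = foo, foo_SOURCES = ...)
    aclocal && autoconf && automake --add-missing --copy
    ./configure && make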

05 Jul 2003 18:14 smithdev

Re: A coindience?
I'll chip in for the fairly sophisticated user who is new to the autotools.

I've the goat book, the potto book (that O'Reilly one) and a couple of others. I've read the info bits.

I couldn't even get started until I found the autoproject script here on Freshmeat.

I think that the best thing that could reasonably happen is cribbing an idea from the kernel community: a make menuconfig to streamline the process.

11 Jul 2003 19:58 rocky

Bravo! A debugger for Make? or M4?
I commend Andrew McCall for a funny, insightful and thought-provoking editorial. Clearly he has touched a sore nerve shared by many others (including myself): the discussion has drawn more people and more verbiage than any other I've read for a freshmeat editorial. And although the editorial made my day and served to vent my frustration in an entertaining way, I am also grateful to the author for having enlightened me a little on SCons, Cons, and A-A-P.

There is a lot in the editorial and discussion I agree with and won't repeat. However I'd like to share a couple of my experiences.

I wrote a debugger for bash (bashdb.sourceforge.net, or look on freshmeat) in large part to be able to debug configure scripts. And I felt the debugger was industrial strength when I was able to debug even the most trivial configure script. A configure script generated from a 4-line input will run to hundreds of lines. In fact, the first time I tried to debug a real configure script I thought the debugger was in an infinite loop. On debugging the debugger, I realized it was just that the configure script was 30 thousand lines long and it was taking a long time to read the script into an array. After that I put something in the debugger to print out the line count every 1000 lines.
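
(For the curious, pointing it at a configure script is just something like the following, assuming bashdb is on your PATH:

    bashdb ./configure --prefix=/opt

and then you can set breakpoints and inspect variables as in any debugger.)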

Shortly after feeling elated that I could henceforth debug configure scripts, I was immediately depressed on realizing that many times I'd like to be able to debug "make", and there is not really any sort of debugger for that. One can get verbose output, but I don't consider that a debugger, which lets you stop at some point in the execution, specify variables to examine, and run commands at that point.
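
For reference, the verbose output he is alluding to is roughly what GNU make offers:

    make -d            # full (extremely verbose) debugging trace
    make --debug=b     # basic progress: which targets get remade and why
    make -p            # dump make's internal database of rules and variables

None of which, as he says, lets you stop at a given point and poke around.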

Ditto, I think, for m4. (Pssst... debuggers have been around for 40 years if not more -- even older than these tools).

Of course, Perl and Python both have debuggers, and there are even GUI frontends for them. (I use GNU/Emacs, but check out the modernization to ddd I did with Daniel Schepler.) The reason a configure script is something like 3000 lines long is precisely that it targets lowest-common-denominator shell (perhaps not even POSIX), so configure doesn't use arrays or hash tables, let alone functions or modules.

Which brings me finally to one last comment on automake - how unnatural it seems to me to use it for Perl programs, or to have Perl replace parts of the automake system. I have a number of Perl projects that use automake. See for example ps-watcher (ps-watcher.sourceforge...).

Perl programs tend to use their own configure system: the MakeMaker module which I've found lacking for standalone programs like ps-watcher or recycle-logs (recycle-logs.sourcefor...). The Perl testing system I think has always been a little superior to that used by the default automake system. But somehow the two seem to me a little at odds with each other or at least they don't have provisions for working with each other; in my opinion there is something I can't really quantify that makes using the two together a little awkward. Likewise, I've used Perl for post-configure scripts (in packages written in Perl so the argument about having to install Perl is moot, because its a strong requirement of the package). Again there's something I can't really define, but it always feels a little foreign when I do this.

12 Jul 2003 22:45 jdfulmer

On the other hand...
If a rookie developer is intimidated by m4 and sh, then I guarantee he's not producing complex work. In that case, he can surely get by without writing an m4 macro. Out of the box, it's trivial to employ autotools to test for C functions and determine the compiler and linker. But if a developer is smart enough to determine that a system lacks strdup, is he smart enough to code around it? Probably not.

The author notes the prevalence of autotools based projects. Price is not an issue here. Autotools grabbed mindshare not because it was free, but because it was good. At a time when significant projects have switched from alternatives TO autotools, the author picked a curious moment to complain about it...

13 Jul 2003 09:35 RoceKiller

Re: Binaries & Why Makefiles in Python are bad...

> Who wants to compile KDE, or Mozilla, or OpenOffice?

Me! And everyone else who wants to optimize his software for his specific architecture. And what about optional support for, e.g., X and ncurses? You'll end up having to distribute a large number of binaries, and a lot more if you want to distribute for the different architectures. BSD ports users need to compile. And so do Gentoo users.
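
That is, the sort of thing a source-based user does as a matter of course (the flags are only an example for one particular CPU):

    CFLAGS="-O2 -march=athlon-xp" CXXFLAGS="-O2 -march=athlon-xp" ./configure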

13 Jul 2003 14:00 smithdev

Re: agreed

>
> % coherent form, preferably using sane
>
> % tools (who the hell uses m4?).
>
>
> I used to generate my entire web site
> via a complex
> set of m4 macros. Not my fault if you're
> not geek
> enough to do that :-).
>
>
> % It would also be nice if there was a
> GUI
> % option so less experienced people
> could
> % install without having to use the
> % console.
>
>
> There is such a tool. It is called
> 'krpm' or 'gnorpm',
> my friend. Less experienced people
> should not be
> messing with source code, and shouldn't
> even
> know what a compiler *IS*, much less be
> invoking
> it.
>

In addition to the auto-tools and sendmail, this is about the 4th application I've ever heard of that uses m4.

I beg to differ on your Less Experienced People comment. If LEPpers don't get in and figure stuff out, how will their leprosy be cured?

14 Jul 2003 04:58 ed_avis

Re: Improvements

The RMCH paper goes into some detail with real benchmarks demonstrating that having a single makefile _is_ a lot faster than having several which need to be scanned with recursive 'make' in each directory. If for some reason you don't find the benchmarks convincing, think about the relative costs of reading a single file and analysing it (mostly CPU bound) versus changing directories, invoking recursive 'make' processes and reading several makefiles - you'd intuitively think that a single file will be faster. Lastly, my own experiences certainly confirm what the RMCH paper says; recursive make does add a good few seconds to build time as separate 'make' processes are run in each of several subdirectories (on the particular project I'm thinking of, about 50 in total). Even if nothing needs to be built in that directory you still pay for recursively invoking make and reading the Makefile.

If you really have not seen any of the problems the paper describes then you're quite lucky. Myself I have seen exactly the classic makefile problems (files that aren't rebuilt when they should be, builds that take too long) and the classic broken workarounds described in the paper (running 'make' more than once; trying to ensure that make runs in a particular order rather than specifying dependencies and letting make figure it out).

Can you edit a source file anywhere in your project, type 'make' and have the executables correctly rebuilt? If not, your build system is broken. Does this work reliably with the -j flag? If not, your build system is broken.
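
A concrete version of that test, with a made-up path:

    touch src/util/buffer.c
    make -j4     # everything that depends on buffer.c should be rebuilt -- and nothing else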

14 Jul 2003 07:09 reidrac

Re: A coindience?

>
> % Most ranting comes from people who
> don't
> % understand the auto* tools (proving
> that
> % by writing things like you have to
> have
> % automake installed to compile things
> or
> % you have to know m4 to create autoconf
>
> % macros) and who don't remember how
>
>
> Indeed. For example, the cryptic error
> message that
> he described is something that happens
> with
> automake, not with autoconf. I do
> not use automake so I don't have that
>
> problem with the various programs that I
> maintain --
> my generated 'configure' scripts run
> pretty much
> *everywhere*, even in places that have
> no
> 'autoconf' or 'automake' installed --
> such as, e.g.,
> the typical IRIX or AIX system.
>
> He whangs on autoconf when the real
> problem is
> automake. Typical of someone who doesn't
>
> understand the tool -- he picked on the
> wrong
> target!
>
> (This is not to say that autoconf isn't
> hairy enough
> as it is! Just that blaming 'autoconf'
> for the problems
> of 'automake' is, as Spock would put it,
> "illogical"!).

I agree with you, even using automake.

I've developed a little game using SDL and it compiles and runs without problems... even on Solaris 8/9. And I'm an auto-tools novice. Maybe that was my lucky day? Dunno :-)

I cannot understand the article when that dude says you need the auto-tools installed in order to make configure run. Well, that's not needed (I don't want to say what's true or false). I can see just one case where it would be: when the configure stuff is wrong.

And if someone distributes a broken configure, is that the auto-tools' problem? I think not :-)

No matter how great ant, buildtool, or whatever you wanna use is... if the developer using it is a jerk, you'll get the same kind of configure & build environment you had with auto-tools :-)

Maybe the whole problem is that the auto-tools are not trivial to use, and people are sometimes too lazy to read the available docs. I must admit it took me about 3 hours and quite a few tests to be sure all the stuff was OK.

I don't agree at all with the article and I think your point of view is almost OK.

14 Jul 2003 19:54 jamesh

Re: You're dead on

> In addition, automake/libtool/autoconf
> are a versioning nightmare. Output like
> this is not uncommon:
>
> Of course, if you upgrade you break
> something else.
>

You are comparing apples to oranges here. You are trying to build Gimp out of CVS, and none of the auto* generated files have been checked in there. To make things easier for developers, there is an "autogen.sh" script they run to call the necessary tools.

If you want a copy of the source where everything has been tested and doesn't require the tools, get a tarball. If you must build from CVS, be prepared to install the tools the developers are using (after all, CVS is primarily for the benefit of the developers, not the users).

16 Jul 2003 10:26 IkerAriz

Re: CMake solves many of the mentioned problems
Could not agree more. CMake provides a very simple language with the basics (flow control, for example) and works for both *NIX and Windows. I tried autoconf, tmake and jam but found that CMake tops them all hands down. It's also actively developed with new features added all the time. It provides portable routines for finding libraries, headers or any file (wherever they may be), it easily compiles code conditionally (eg, one header, multiple impl files - one for each platform, etc), it supports config.h style includes, and more.

Finally, it's a small, self-contained package and a clearly written piece of code.

16 Jul 2003 13:25 pphaneuf

He's talking about automake, not autoconf

Most of the bad things he found with autoconf are actually problems in automake.

autoconf by itself doesn't make configure re-run itself, it doesn't check for autoconf or automake on your system, and it doesn't generate makefiles (it can mess with an existing makefile, but it is automake that *writes* makefiles, not autoconf), etc...

My current solution (in XLPC (freshmeat.net/projects...), for example) is to avoid automake like the plague and write my own makefiles. It's a pain, but automake is so awful, it's clearly the lesser evil.

23 Jul 2003 06:11 tthomas48

GConfigure
For those who haven't seen it yet you should try out GConfigure:

freshmeat.net/projects...

It's not a completely mature package and may require some hacking to get working on your system, but there's nothing more satisfying than being prompted by a GUI to check off your autoconf options and after clicking ok, ending up with an RPM installed on your system.

24 Jul 2003 05:42 artichoke

Re: GConfigure

> but there's nothing more
> satisfying than being prompted by a GUI
> to check off your autoconf options and
> after clicking ok, ending up with an RPM
> installed on your system.

Especially on your head-less Debian boxen :)

27 Jul 2003 05:20 skarnet

Re: Comments and alternatives

> The only thing the auto* tools have going for them is that they work.

Not even. I am a diet libc (www.fefe.de/dietlibc/) user, and I find it very difficult to link autotools-using programs against the diet libc. Why? Because the configure scripts detect that my system is Linux, and immediately assume that I'm using glibc. Did I hear "open-mindedness"?

So, as far as portability is concerned, the autotools are a failure - I can't even make them work on my i386 Linux.
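
(For reference, the usual way to point a build at the diet libc is its compiler wrapper, e.g.:

    CC="diet gcc" ./configure

which is exactly the kind of setup those glibc assumptions break.)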

> Personally, I won't bother. I distribute
> my programs with a
> Makefile. Yes, it is not portable. But
> any developer can fix it in
> 20 seconds if it does not compile.

I think this is the way of wisdom. Personally, I have developed my own set of libraries hiding non-portabilities (skalibs (www.skarnet.org/softwa...)), and build my software on top of it. Any portability problem is centralized in skalibs, and the build-time tests are understandable. I have no need for the autotools, and I believe that no one does, either.

27 Jul 2003 09:34 eperhinschi

this is a geek world! ... or not?

> I used to generate my entire web site via a complex
> set of m4 macros. Not my fault if you're not geek

I thought that the editorial was about tools that make common people's lives complicated; geekiness is another issue....

> [...]
>
> There is such a tool. It is called 'krpm' or 'gnorpm',
> my friend. Less experienced people should not be
> messing with source code, and shouldn't even
> know what a compiler *IS*, much less be invoking
> it.

Why shouldn't they? Are you afraid that less experienced people might like it, learn it, and take your job ;-) ?

The editorial was also about taking care that the user, experienced or not, does not lose precious time.

I am not experienced as far as C or C++ programming is concerned, but I can survive if

$ configure;make;su;make install

does not work smoothly.
I don't expect the developers to see the future and make room for all the changes that might occur, and I don't expect them to adapt their source code to those changes if they have lost interest in the program. I am happy if I can find out that I have to make some soft links to persuade the configure or make programs that I run Red Hat and not Debian or Slackware, or that I have to modify some lines in config.h, and I am extremely grateful when I find in config.h or someplace else

# for debian users ...

or

# for slackware users ...

The contemporary "unexperienced users" are not smarter that those that caused the term "user friendly" to be coined, but neither are they using hardware worth bilions on a time sharing basis. Most of them are going to waist their personal time and run the risk of trashing hardware worth about 800$. Why not let them choose if they want to spend some time solving a problem and learning more about their tools in the process?

I am no authority on m4, autoconf, or such other "thingies". They worked great for me most of the time, and when I decided that I was losing too much time recompiling, I switched from Slackware to Debian for the sake of those 9,000+ precompiled packages. I guess that m4 etc. are good enough, but if there are simpler solutions, why not use them if they can make both the developer and the end user happy?

30 Jul 2003 08:56 pltxtra

Re: Comments and alternatives
> Why should I? I am just a user. The
> point of configure is to
> make it easy to distribute programs, so
> that the user has
> only simple steps to accomplish to get
> the program compiling:
> ./configure; make; make install

Yes, and yet you keep hacking on distributions that
don't compile... If a distribution does not compile,
you can't blame the compiler, the configure script,
or anything like that. It is the developer's responsibility
to see that it works, and to understand
the tools they are using.

The problem, as we all know, is that there is no real standard
for anything; different *IXes can be very different.
That is why we need something like autoconf/automake.
It's the developer's choice to make, so if something doesn't
build, contact the developer about it.

I wonder how many programs are downloaded, fail to build, and just get thrown in the trash, while the user goes on and on, nagging and nagging. What he or she should do is something CONSTRUCTIVE. It's free software; if it doesn't work, either fix it or live with it.

30 Jul 2003 12:54 whoffmanny

CMake
CMake is an open-source tool developed to stop the insanity. See www.cmake.org for information about CMake.

Before developing CMake, its developers had years of experience with autoconf. CMake is truly cross-platform and supports running autoconf-like tests even on Windows without having to install a UNIX emulator. CMake does not depend on any additional tools like m4, Perl, or Python. The only requirement is that CMake itself is installed. There are binaries for most platforms, and the source is C++ that can be compiled for most platforms.

CMake generates native build files. So for UNIX it generates makefiles (with full support for automatic C/C++ dependency information generation), and for Windows it can generate IDE project files and various makefiles.

The input to CMake is a simple extendable command language. We have found it easy to port many autoconf based projects to CMake.

Here is a small example of some autoconf like tests in CMake:

CHECK_INCLUDE_FILE_CONCAT("sys/types.h" HAVE_SYS_TYPES_H)
CHECK_SYMBOL_EXISTS(socket "${CURL_INCLUDES}" HAVE_SOCKET)

Creating executables and libraries (shared and static) can be done with simple commands:

To create an executable:

SET(FOO_SOURCES foo.cxx bar.cxx car.cxx)
ADD_EXECUTABLE(foo ${FOO_SOURCES})

To create a library:
ADD_LIBRARY(bar ${BAR_SOURCES})

CMake also provides a nice interface to allow users to adjust build options. This avoids the problem of how to enable GTK. The interface can be programmed from CMake's input files to give users good feedback on what they need in order to enable a feature.
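
(A rough sketch of what that looks like in a CMakeLists.txt; the option name and the GTK details here are only illustrative, not taken from any real project:)

OPTION(USE_GTK "Build the optional GTK front-end" OFF)
IF(USE_GTK)
  FIND_PACKAGE(GTK)
  IF(NOT GTK_FOUND)
    MESSAGE(FATAL_ERROR "USE_GTK was requested, but GTK could not be found")
  ENDIF(NOT GTK_FOUND)
  INCLUDE_DIRECTORIES(${GTK_INCLUDE_DIR})
ENDIF(USE_GTK)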

08 Aug 2003 02:55 jmmv

Buildtool
I have seen that this article has mentioned several alternatives to the auto* tools. I would like to recommend buildtool (my own project ;). It can be used to write configure scripts and makefiles in a very easy way; it provides functionality like libtool and pkg-config, plus some other minor things...

It is a single, integrated tool instead of multiple independent ones. And it does not generate files in the sense of the auto* tools: configure scripts and makefiles are kept short and simple and are parsed at runtime.

And it will (hopefully) not suffer the auto* tools' versioning problems once it's marked stable ;). Cheers

08 Aug 2003 16:40 golrien

Jam
I can't say enough nice things about Jam. I use the Freetype version (www.freetype.org/jam/) rather than the original (it nets me support for Windows 9x and junk).

Basically, imagine Make, but with a whole bunch of built-in rules and portability already there, so you don't have to screw around with anything. It'll even work with Autoconf :)

18 Aug 2003 19:32 macdaddy

Never
Has anyone else never had a single problem that Andrew points out? Or does anyone else not see any of what he points out as a problem? I've been building from source for a helluva long time and I have yet to find an auto* problem that didn't have a simple fix. Is anyone else NOT having problems?

26 Aug 2003 13:54 gianni

Check out MakeXS
I just published MakeXS (www.makexs.com/) which solves some of the issues you bring up.

Needless to say, I agree with many of the issues you talk about. The approach taken by MakeXS is to make it as easy as possible for the developer.

31 Aug 2003 15:43 mikerm

Re: auto* work fine
> I'd have to mostly agree, most of the
> autoconf nghtmare things are from very
> vey poorly written configure scripts
> and/or makefiles. configure should
> never need to run more than once (should
> never happen in a makefile), among other
> stupid things developers often seem to
> do (this one has caused me to most
> problems).
>

I set a new personal record this weekend just gone. Downloaded eight pieces of software from Freshmeat and tried to configure, make, install. NONE of them worked. Not one out of the eight. And I am using Mandrake 9.2, so you would think that everything I need would be up-to-date.

I'm sincerely trying to move from Windows to Linux because I hate the way Microsoft works. But at least I have seldom had issues installing software on Windows, while I have frequently had them installing software on Linux. I don't believe I should have to be a C programmer and a Unix-head in order to install a program. I know that some Linux people have a "leeter-than-thou" attitude that says if you "can't be bothered" to learn all the complexities, you shouldn't be using Linux. But I believe if you are ever going to get ordinary users (even users at my level - I'm reasonably technical, just not very familiar with Unix and C and don't see why I should have to be in order to simply install some software) migrating in large numbers to Linux, then the software installation needs to be made much, much, much easier. And the documentation much better (it certainly helps if the developers mention on the download page what other packages you need to have installed already, rather than letting you find out by looking at the error messages).

01 Sep 2003 04:21 stigbrau

Re: auto* work fine

> I set a new personal record this weekend just gone. Downloaded eight pieces of
> software from Freshmeat and tried to configure, make, install. NONE of them worked.
> ...
> I'm sincerely trying to move from Windows to Linux because I hate the way
> Microsoft works. But at least I have seldom had issues installing software on
> Windows, while I have frequently had them installing software on Linux. I
> don't believe I should have to be a C programmer and a Unix-head in order to
> install a program.

Would you normally download source and compile it yourself on Windows? Otherwise you're comparing apples and pears. It is pretty damn easy to install precompiled software packages prepared by (and for) most major distros.

I am reasonably confident installing packages from source on Linux, but I wouldn't even know how to start on Windows. Installing from precompiled binaries is generally much easier/more convenient on Linux than Windows, IMNSHO (at least in the case of Debian -- don't know how good other distros are).

04 Sep 2003 06:04 Paslophunk

Amen.
This is the best article I have ever read on Freshmeat!

You know, I actually miss just editing a config.h or
whatever; it seemed so much easier (when the
dependencies were <10 :)

07 Sep 2003 17:06 stark

Put the blame where it belongs: Automake
All the complaints in the article are PRECISELY the kinds
of complaints that autoconf programmers worked very
hard to avoid. For example, it's IMPOSSIBLE to have
version skew with autoconf because the configure script
is completely pre-generated before being shipped. As
far as the user is concerned it's a simple sh script.

ALL of these problems stem from the "automake"
monstrosity that creates many more problems than it
ever solved. Unfortunately people now tar autoconf with
the same brush as automake and libtool whenever they
run into problems with these. Sigh.

10 Sep 2003 00:16 jespa

Re: Never

Well, I must say that I usually compile all my packages manually (every GNOME package, for example), and it is NOT usual to find these problems.

There is an honorable exception: I have tried 10 times to compile evolution-1.4, and it has EVERY problem described in the article. Problems with DB3 versions, with the gtkhtml-3 location, and many more....

24 Sep 2003 14:08 matheny

Re: Never
As a project admin, I run into these problems. But that's only because I am having to generate all the configure files. As a user, you should never run into these issues using the Autotools, since the files you get have all been generated beforehand. There is no need for m4, autoconf, automake, or any of the other autotools if you are an end user. The only tools you should need as an end user for compilation are the Bourne shell and a version of make. If you are a developer using the autotools, accept the fact that all the developers need to be using roughly the same version of the Autotools. Making this concession is a pretty reasonable tradeoff considering the ease it brings to your users in compilation, and the ease it brings to developers in terms of both increasing portability (autoconf does system checks I wouldn't otherwise think of) and decreasing the amount of time needed for a development cycle.

> Has anyone else never had a single
> problem that Andrew points out? Or does
> anyone else not see any of what he
> points out as a problem? I've been
> building from source for a helluva long
> time and I have yet to find a auto*
> problem that didn't have a simple fix.
> Is anyone else NOT have problems?

24 Sep 2003 21:05 trejkaz

hum
Maybe what we need is just to educate every developer in creating something like an ebuild, so a single command will install any program regardless of what build system it actually uses...

It won't solve the problem, though, because then include files move, libraries move, etc. We would also need something like KDE's 'kde-config' script for every single package on the system. Better yet, if every package did it, you would just need one program reading the installation paths from a database. We already know what files each package holds; why not also store the CFLAGS a package requires to be used?
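
(This is more or less what pkg-config's .pc files already do; the example below is for a made-up liblzw package, purely to illustrate the idea.)

# /usr/lib/pkgconfig/liblzw.pc -- hypothetical metadata for the imaginary lib-lzw
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include/lzw

Name: liblzw
Description: Example LZW compression library
Version: 3.2.3.4
Cflags: -I${includedir}
Libs: -L${libdir} -llzw

# Any build system can then ask:
#   pkg-config --cflags liblzw    (prints the -I flags)
#   pkg-config --libs liblzw      (prints the -L/-l flags)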

06 Oct 2003 09:14 clg

Re: Put the blame where it belongs: Automake
Yes, yes, and yes!
This article doesn't apply to autoconf at all, at least from the user's point of view, since the user never uses autoconf.
From the developer's point of view, autoconf is a useful set of "metaprogramming" tools to create a shell script.

When some install complains about the autoconf version, it only proves that the Makefile has been generated by automake, which indeed generates Makefiles that rely on autoconf. IMHO this is a heresy.

From my developer's point of view, I like to have control over at least 50% of the files present in a directory (i.e., to at least understand what they are there for). With makefiles (generated by some custom shell script or hand-written) and autoconf, I do control everything.
If I add the complete auto* chain, the work becomes more difficult than it would be to write everything in plain sh.

In short: autoconf rules, automake sucks, and I'm happy to see that this opinion is shared by at least one other person in the world.

16 Feb 2004 10:24 pelliott

an immense number of details are embodied in auto*
An immense number of details are now embodied in the autoconf/automake/libtool system. No doubt when these tools were first coded, the programmer coded for what would be "common sense". Then the idiosyncrasies of the target systems came along and caused bugs. Each bug is a patch in the long history of one of the tools. To replace these tools, the history of autoconf/automake/libtool must be understood in detail or repeated. (Those who forget history are condemned to repeat it.) No one wants to do that when there are more fun things to do. Most of the tools suggested as replacements do not have as wide usage as the auto* tools; if they did, they would begin to accumulate the same long bugfix history. In short, the auto* tools are complex because they help manage an immense amount of complexity! They are definitely easier to use than managing that complexity by hand or rewriting the auto* tools! It is unlikely that anyone will want to rewrite those tools now.

The mice agreed that the cat should be belled.
But who will bell the cat?

13 Jun 2004 16:47 rocky

Re: Bravo! A debugger for Make? or M4?
Well, about a year later, with no takers on my call for a debugger for GNU Make, I've written one myself. See the freshmeat announcement here.

11 Jul 2004 15:50 dilinger

Build system not mentioned
Check out cmake (www.cmake.org/) sometime. I've recently started using it for my projects, and I've found it to be very flexible.

19 Jul 2004 23:20 ensonic

Be fair
For everyone who uses not Linux but, e.g., Solaris, the autotools are a great thing. If you have ever downloaded a package that just uses Makefiles and assumes that stuff is where it is under Linux, then you know what I mean.

The auto-tools are not optimal, and I agree that it is really urgent to put all the docs into a shape where they can easily be updated (e.g., DocBook via a wiki), but the tools work and are more or less the standard way of doing it. Users have acquired basic knowledge of them. If they unpack something that requires them to do something else, the experience is not always better.

And finally, there are ways to make the auto-stuff easier. It started with packages providing the *-config files and continues with pkg-config.
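
(As a small illustration of how much this helps, a hand-written Makefile can simply ask pkg-config for the flags instead of guessing paths; gtk+-2.0 is just an example module name, $(shell ...) is GNU make syntax, and the recipe line must be indented with a tab.)

GTK_CFLAGS := $(shell pkg-config --cflags gtk+-2.0)
GTK_LIBS   := $(shell pkg-config --libs gtk+-2.0)

myapp: myapp.c
	$(CC) $(CFLAGS) $(GTK_CFLAGS) -o myapp myapp.c $(GTK_LIBS)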

The next time you write such an article, please be precise and fair. And if you don't like the state of the autotools documentation, do something about it. You are certainly welcome to, and can earn a lot of fame if you are successful (definitely more than by writing such an article).

19 Sep 2004 00:56 merlin262

Re: Be fair
Note: I am the author of the original article.

I think some background information about this article would be useful. In it, I placed the blame squarely on autoconf when, in retrospect, it should have been placed on lazy developers misusing autoconf. The article was written out of sheer frustration with the following:

1. Trying to get an existing (large) project moved over to autoconf instead of traditional makefiles, and having numerous troubles along the way.

and

2. The fact that, at the time, I was working on a Linux distribution and constantly found myself running into poorly written, badly designed, unacceptable configure scripts, or projects that were shipped with only an autogen.sh instead of a pregenerated configure (where you run into problems with having the right version of autoconf installed).

I freely admit it was a rant, but I'd like to think that some people at least started looking at the problems with their autoconf scripts and fixing them, because many of the problems I listed are now not as apparent. Newer versions of autoconf and automake have definite improvements, and I'm pleased to be using them, despite the high learning curve. Although, last I checked, the documentation was still sorely lacking. Perhaps if I get some spare time, I'll take up your suggestion and write a tutorial for it.

I still think that we could do better, though, as evidenced by software like scons, the Linux kernel build system, and Ant.

> For everyone who uses not Linux but, e.g., Solaris, the autotools are a
> great thing. If you have ever downloaded a package that just uses
> Makefiles and assumes that stuff is where it is under Linux, then you
> know what I mean.
>
> The auto-tools are not optimal, and I agree that it is really urgent to
> put all the docs into a shape where they can easily be updated (e.g.,
> DocBook via a wiki), but the tools work and are more or less the
> standard way of doing it. Users have acquired basic knowledge of them.
> If they unpack something that requires them to do something else, the
> experience is not always better.
>
> And finally, there are ways to make the auto-stuff easier. It started
> with packages providing the *-config files and continues with
> pkg-config.
>
> The next time you write such an article, please be precise and fair.
> And if you don't like the state of the autotools documentation, do
> something about it. You are certainly welcome to, and can earn a lot of
> fame if you are successful (definitely more than by writing such an
> article).

29 Sep 2004 07:56 twysf

Re: MPlayer core team likes to swear endlessly

> It makes boring tasks easier and helps in setting up the right
> dependencies. A Makefile.am typically is just a few lines.

Obviously, the Ccide project at:

www.sourceforge.net/pr...

needs serious help. The Makefile.am file contains 164 statements, and there are a total of 1290 *.in file statements. There are just 2721 lex, yacc, and *.c source statements in the program. Somehow, I don't think that yet another read of the auto* docs is going to help much.

28 Jan 2005 20:39 thenewme91

Alternatives for automake/autoconf? What about alternatives for makefiles?
I've read over the page and most of the comments, and many points seem relatively valid. It should be noted that maybe 95% of the world population with access to computers are "lazy idiots" in terms of reading docs and programming and such. This is probably what drew them to computers in the first place, since typing is so much easier and more convenient than writing and talking.

At the same time as people are saying that developers should learn how to use automake/autoconf, learn to check spelling errors, and read thousands, if not millions, of pages of assorted, cryptic, and occasionally incomplete or inaccurate "documentation", software developers are trying to get as many developers as possible to join their teams and use their programs. I believe this is kind of ironic.

You look at the people who write HTML, and yes, it is markup, not programming. But there are similar cases everywhere. Look at things like Basic, Logo, and Turing, and you find that the English-like syntax makes it very close to not-programming. Java and C++, by this thinking, aren't that far behind, either.

Personally, I've been using Linux for maybe 3 years as a 14-year-old, and I've never heard of m4. At the same time, I've learned the rudiments of Java and C++ and don't seem to have many problems with them. As for compiling programs, I find that while most programs work, starting autoconf/automake projects is very cumbersome for the inexperienced (like everything else in the Linux world). In that kind of environment, it doesn't seem uncommon to have to redo things 5 or 10 times. You have to be really patient.

In terms of the portability issues, it's obvious that people will use old programs and operating systems. Upgrading to a newer version or switching operating systems takes time and/or money, and is rather inconvenient. That kind of thinking isn't uncommon, especially among adults.

It's said that autogenerated makefiles and configure scripts are rather cryptic, and therefore hard to edit. It seems to me that there are also issues with the variants. Wouldn't it make sense to look at all the bugs, history, and features of makefiles, and then create something simple, machine-producible, close to English (or some other common "language", like C++, Java, Perl, Python, or Turing) and easy to parse, and make a replacement for make and makefiles and create versions for all the OSes out there (e.g. Solaris, Unix, Linux, Windows 3.1, Windows 9x, Windows NT, Windows XP, HP-UX, 64-bit OSes, whatever comes out in the future) and redub them something else?

Instead of ./configure, maybe we should have a ./installme script, and instead of Makefile we should have a Make2src or something in our programs. Of course, if we did this, we'd still include configure scripts and makefiles with our packages. And any new tools would have to be really small (e.g. under 1-2 MB, preferably under 512 kb) before people would use them if they have any bandwidth concerns (e.g. dial-up users, DSL users with transfer limits).

Just my two (well, rather twenty) cents.

24 Aug 2005 00:43 sja

Cons is a solution ? I can't agree.
Having used cons for two years, I can state without equivocation that it is powerful and works reasonably well, but somehow every project that used cons ended up with massive conscripts that looked ... awful. Classic spaghetti code, where an apparently innocuous change in one spot creates havoc due to hidden side effects.

I believe there is too little structure in the scripting language. It's the write-only language problem.

24 Aug 2005 00:47 sja

missed point - cross development
You didn't mention one major deficiency of autoconf and friends: they don't play nicely with cross-development environments.

17 Nov 2005 08:19 frm

Bakefile
Basically, I don't agree with this article.
You can have trouble with *any* tool you use if you don't know how to use it or you're too lazy to read the docs.

Anyway, I agree that the auto* error messages are quite cryptic and that a lot of things could be made better.
Also, a newer integrated build system would be a great thing. But on the various distros I tried (FC, Mandriva, SuSE), I could only find the classic GNU make and GCC. I never found SCons, Cons, CMake, tmake, gconfigure, buildtool, A-A-P, or Ant installed by default.
The common user doesn't even know they exist.
The common Unix user knows "./configure && make && make install".

This is why I prefer to keep my project standard, with the usual 'configure' stuff, but still get all the benefits of a modern tool.
Using Bakefile (bakefile.sourceforge.net) you can write all your build logic in a single XML file with a nice syntax (BIG advantage: using XML, you don't have to learn a new syntax as you do for CMake and others) and use it to generate the makefiles you need for a *lot* of build tools (autoconf on Unix; MSVC, Borland, MinGW, and Watcom on Win32; Xcode on Mac OS; etc.).
Then you pack all the makefiles into your project and ship it to the user. If he's a Unix user, he will just need "./configure && make && make install" (the Makefile.in is written by Bakefile automatically; the AC_BAKEFILE macro allows you to write the configure.in file in a few minutes).
If he's a Win32 user, he can use whichever compiler/IDE he wants.
All of this works right out of the box.
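
(A rough sketch of the idea, from memory: the element names below are assumptions and may not match the actual Bakefile schema, so treat this purely as an illustration of "one XML description, many generated makefiles".)

<?xml version="1.0"?>
<makefile>
  <exe id="hello">
    <sources>hello.c util.c</sources>
  </exe>
</makefile>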

All other tools require that the end user install them on their computers first.
Bakefile is used only by the developer who created the software and is completely hidden from the end user.

Can you imagine something more backward-compatible (and backward compatibility is one of the major issues that led to the use of systems like autoconf!!) and easier to use from the end user's POV than Bakefile?
If yes, let me know ;)

Francesco Montorsi

08 Dec 2005 04:52 rmunn

Re: Definitely
You forgot to close your quote, and now everything underneath your article is green. Here's a closing quote to take care of it...

01 Feb 2006 18:07 proskin

Autoconf/Automake not as bad if used properly

> The configure script runs for a while, then exits with an error which basically translates to "You have an autoconf version which is three weeks old; please upgrade"

This is a clear sign that the package maintainer used Automake but failed to use "make distcheck" to create the package. Properly generated packages don't have this problem.

> He's forced to parse this output because there is no real standard for the options to pass to configure. He wants to use GTK, but how does he do it? Is it "--with-gtk" or "--enable-gtk"?

It's "--with-gtk", since gtk is an external package. Sure, there are corner cases when it's hard to say if "with" or "enable" should be used, like support for GNOME that doesn't involve linking to GNOME libraries. Anyway, users doing customization are supposed to read what such customization will do.

> Then he discovers that configure still thinks that the library is not there. He does a bit of investigation and discovers that he needs to delete config.cache.

This was fixed in Autoconf 2.5x many years ago. Using obsolete Autoconf is impolite towards users.

> That's if he's lucky and the "cached" option isn't in some other random directory or file

This has never been a default. Developers shouldn't be doing this, and users should be responsible for unsafe customization of their systems.

> The configure script now runs properly and outputs a nice Makefile, so Joe runs "make". To his surprise, configure, for no apparent reason, decides to run again.

Another case of not using "make distcheck".

> While looking through the log messages, he happens to wonder if there is, perhaps, maybe, some way to not have to run the same 50 tests over and over and over.

Correct tests should cache results unless those tests are really fast (e.g. checking that a file exists).
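
(A sketch of how a developer caches an expensive result with standard autoconf macros; the lzw header test itself is invented for the example:)

AC_CACHE_CHECK([for usable lzw headers], [my_cv_header_lzw],
  [AC_COMPILE_IFELSE(
    [AC_LANG_PROGRAM([[#include <lzw.h>]], [[return 0;]])],
    [my_cv_header_lzw=yes], [my_cv_header_lzw=no])])

The second argument is the cache variable, so a re-run of configure can skip the compile test entirely.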

> To his amazement, while configure picked up the fact that he did indeed have lib-lzw-3.2.3.4 installed, it failed to realize that the header files were located in /usr/include/lzw, not in /usr/include

A proper configure script should find it at configure time.

> He finally finds it and edits it to the proper value. configure again decides to run for no apparent reason (even though "make" skipped it the last five times) and overwrites all this hard work

Editing the Makefile is not a good idea. Variables should be overridden on the make command line. Of course, that should be a last resort; normally, it's easier to re-run configure.
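
(Concretely, using the /usr/include/lzw example from the article; the paths are purely illustrative:)

make CPPFLAGS="-I/usr/include/lzw"

or, better, tell configure up front:

./configure CPPFLAGS="-I/usr/include/lzw" LDFLAGS="-L/usr/local/lib"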

> Now, Joe is presented with an interesting problem. He realizes that he needs to edit something besides the Makefile. But what does he look at? configure.in? Makefile.am? Makefile.in? Makefile.yoyo? Makefile.banana?

Or maybe read the manuals. Or maybe write to the mailing list or to the package developer. After all, it's the developers that should have read the manuals in the first place.

> By now, the average user has done one of the following:
>
> 1. Given up and tried a different package.
>
> 2. Shot himself in the head with a twelve gauge shotgun.

Actually, reporting such problems to the developers works too. Maybe they'll be motivated to use Autoconf and Automake properly.

01 Mar 2006 06:14 RonKaminsky

Re: Autoconf/Automake not as bad if used properly

>
> The configure script runs for a while,
> then exits with an error which basically
> translates to "You have an autoconf
> version which is three weeks old; please
> upgrade"
>
>
> This is a clear sign that the package
> maintainer used Automake but failed to
> use "make distcheck" to create the
> package. Properly generated packages
> don't have this problem.

Oh? And that works every time even if the user unpacked the files using "tar -xmf yourWonderfulAutoPackage.tgz"?

Took me a long time to figure out that the auto* tools get totally confused in that case and tell me that I need to install the newest version of them, instead of also warning me that it could be because file mod times have changed. Ah, right, I forgot, I should have read all the documentation!

06 Apr 2006 22:42 proskin

Re: Autoconf/Automake not as bad if used properly
If it doesn't work for you, report it as a bug to the package maintainer. If the maintainer used "make distcheck", report the bug to Automake developers. They will welcome bug reports.

07 May 2006 13:01 multi_io

Re: Be fair

> Everyone who uses not Linux but e.g.
> Solaris, the autotools are a great
> thing.

More than once I've spent hours on Solaris 9 building automake-based projects, only because the Makefile is absolutely incomprehensible and you have to do everything yourself if something doesn't work. You run "make", and it runs into some random error. You copy the command that caused the error to the command line and run it manually -- if that doesn't reproduce the error, you've lost. If it does -- hooray. You correct the mistake on the command line (that's only possible, of course, if it was some commonly used command like a compiler or linker call, and you know how to use those tools -- and if you do, you didn't really need the auto*bloat in the first place). OK, so the corrected command works and produces its output file. Great. You re-run "make", expecting it not to run this command again, since the output is already there. After all, this is "make", and "make" is more than a shell; it respects dependencies, right? Wrong. Virtually all Makefiles these days are more or less degenerate, and automake-generated Makefiles are an especially bad joke. More often than not, the make run *deletes* your manually created file and runs the erroneous command again. I don't know why it does this, but it does, trust me. If that happens, you've lost -- the Makefile is a mess, thousands of lines long and totally incomprehensible. You won't find your erroneous command in there unless you have three days of your time to spare, or you're the author of automake.

Automake must die, the sooner the better. That's all.

24 Jun 2006 11:53 Tadu

Re: MPlayer core team likes to swear endlessly

> Obviously, the Ccide project at:
> www.sourceforge.net/pr...
> needs serious help.

Indeed. It uses hard-coded absolute paths, and whoever wrote it had never heard of recursive make.

30 Jul 2006 16:12 jasen_13

Re: hum

> It won't solve the problem, though, because then include files move,
> libraries move, etc. We would also need something like KDE's
> 'kde-config' script for every single package on the system.

Something like pkg-config, perhaps...

It works quite well here (Linux). It's a bit of a pain to get it working right for cross-compiles (MinGW), where "working right" requires it to completely ignore the native config data.

I haven't managed to get autoconf to work for cross-compiling; scons can be convinced, though.
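
(For what it's worth, a cross build can usually be nudged in roughly this direction; the host triplet and sysroot path below are only examples:)

# keep pkg-config away from the native .pc files
export PKG_CONFIG_LIBDIR=/opt/mingw/sysroot/lib/pkgconfig
export PKG_CONFIG_SYSROOT_DIR=/opt/mingw/sysroot

# autoconf-based packages cross-compile by naming the host triplet
./configure --host=i586-mingw32msvc --prefix=/opt/mingw/sysroot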

23 Oct 2006 09:33 jcalcote

Missing features in scons
I've been working with autotools for about 4 years now - the first 3 years I didn't know what I was doing, and just tried to hack my way through when there was a problem. This last year, I decided to really understand it. It's been painful, but worth the effort. The really sad part is that it didn't have to be that painful. If there were a really good tutorial on not just the mechanics of autotools, but on the underlying motivation for it, and if all of this documentation were gathered into a single place, it would have been almost painless. Here's the basis for understanding autotools, step by step:

1. Learn what the intent behind autotools is

2. Learn how the toolchain works - what generates what?

3. Learn the underlying language (m4).

Now you have half a chance at understanding what it does. Quit thinking you can hack your way through autotools input files without understanding what they are for and how they work.

Before you decide to look at other tools, please try to compare apples to apples. Don't sit there and tell me that scons is a great replacement for autotools. For the things that scons does, it's a wonderful replacement. But if you need the additional functionality provided by autotools, then you just can't do it well in scons. What are these things? Mostly they have to do with package building, maintenance, and distribution.

I'm a packager for SuSE, as well as an open source software project administrator (for multiple projects). Scons is great if I want to build, but it does nothing to help me package and distribute my software. I've been on the mailing lists for scons for some time now, and I've commented on this missing feature set. One of the originators of scons (Steven Knight) has responded to my comments, and the crux of his responses is this: you're right, we need to add these features - why don't you start such a project and add them?

If you want to create a project for in-house use, then by all means, use scons. If, however, you want to create a project to be packaged for distribution in a GNU/Linux distribution, you'd better use autotools, or be prepared to emulate all of the functionality that autotools gives you with a custom makefile. But don't even dream that a packager for a major distro is going to pick up your project and add it to a distro unless you've done just this. You must have support for all of the following targets in your custom makefile:

all dist distcheck install check installcheck

And all of these targets have to build cleanly on most *nix platforms in order to be considered as a candidate for packaging with a distro such as SuSE, RedHat, Ubuntu, or Debian.
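
(To make that bar concrete, here is a rough skeleton of what a hand-written Makefile has to provide; the target names follow the GNU conventions listed above, everything else -- names, paths, the test script -- is illustrative, and recipes must be indented with tabs.)

PACKAGE = foo
VERSION = 1.0

all: foo

install: all
	install -d $(DESTDIR)/usr/bin
	install -m 755 foo $(DESTDIR)/usr/bin

check: all
	./run-tests.sh

installcheck: install
	$(DESTDIR)/usr/bin/foo --version

dist:
	rm -rf $(PACKAGE)-$(VERSION)
	mkdir $(PACKAGE)-$(VERSION)
	cp foo.c run-tests.sh Makefile $(PACKAGE)-$(VERSION)/
	tar czf $(PACKAGE)-$(VERSION).tar.gz $(PACKAGE)-$(VERSION)

distcheck: dist
	rm -rf _build && mkdir _build
	tar xzf $(PACKAGE)-$(VERSION).tar.gz -C _build
	$(MAKE) -C _build/$(PACKAGE)-$(VERSION) all check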

Even now, there is a recent (May 2006) news posting on the scons web site - quote:

"Google's Summer of Code will fund a project proposed by Philipp Scholl to add support for packaging and release dependencies to SCons. Stefan Seefeld will be mentoring Philipp's proposal for the next several months. Congratulations to Philipp, and thanks to Stefan for mentoring."

Summary: Let's not be premature in trying to replace autotools with other build management tools - they're not ready yet - and don't forget, this article was written in 2003 - it's now 2006, and we STILL don't have a proper replacement tool chain for autotools. Perhaps soon...

20 Jul 2007 04:28 Vintermann

Re: Portability is always hard to achieve

> Developers should learn to use the autotools properly instead of
> agreeing with users who bash against them.

I think that if almost no one manages to make a "proper" autotools project, that says more about the project than about the developers. If it were easy to learn to use the autotools properly, people would have done it, and every open source project of any size would pretty soon run into a guy who said "here, I'll fix your build scripts, they're too ugly right now". That just isn't happening...

12 Mar 2013 20:10 ilya239

> He again does the typical thing and runs "./configure --prefix=/opt". The
> configure script runs for a while, then exits with an error which basically
> translates to "You have an autoconf version which is three weeks old;
> please upgrade"

By design, configure scripts generated by autoconf do not require autoconf when run.
To avoid the above behavior, add

AM_MAINTAINER_MODE([disable])

to configure.ac.

22 Mar 2013 06:39 igor2code

I had the same problems before I sat down and wrote scconfig (repo.hu/projects/scconfig): no m4 or other script dependency, no 50 layers, and it doesn't try to automatically generate 50,000 lines of Makefile. It runs the detection, then lets you do whatever you want with the result, be it generating a Makefile.inc with some variables to include from your plain Makefile, a config.h, or a monolithic Makefile.
