Building a Linux Media Network, one step at a time

Wednesday, July 08, 2009

Android SwingWorker

I've been fooling around with the Android development platform. It's quite an adjustment, coming from the iPhone world. More thoughts to follow, although probably on a different blog. I'm getting sick of this blogger nonsense.

Anyways, I thought I'd post a simple equivalent of the SwingWorker class that works with Android's main (UI) thread, its analogue of Swing's Event Dispatch Thread. Here it is:


package ca.razorwire.util;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public abstract class UIWorker
{
    private static final ExecutorService __execSvc;
    private static final Handler __edtHandler;

    private static final String TAG = "UIWorker";

    static
    {
        __execSvc = Executors.newSingleThreadExecutor();
        __edtHandler = new Handler( Looper.getMainLooper() );
    }

    public abstract void doInBackground();

    public void done()
    {
        // Log.d( TAG, "done() executing in thread " + Thread.currentThread().getName() );
        // This space intentionally left blank
    }

    public void execute()
    {
        // Log.d( TAG, "execute() executing in thread " + Thread.currentThread().getName() );
        final Runnable doneRunner = new Runnable()
        {
            public void run()
            {
                UIWorker.this.done();
            }
        };

        final Runnable bgRunner = new Runnable()
        {
            public void run()
            {
                // Log.d( TAG, "bgRunner executing in thread " + Thread.currentThread().getName() );
                UIWorker.this.doInBackground();
                __edtHandler.post( doneRunner );
            }
        };
        __execSvc.submit( bgRunner );
    }
}


And here's a sample usage:


UIWorker worker = new UIWorker()
{
    public void doInBackground()
    {
        Log.d( TAG, "Sleeping for a bit." );
        try { Thread.sleep( 10000 ); } catch ( Exception ignore ) {}
    }
};
Log.d( TAG, "Executing worker..." );
worker.execute();
Log.d( TAG, "Returned from execute()" );


If you uncomment the Log calls in UIWorker, you should see some output like this:


07-08 12:14:11.015: DEBUG/rweeks(1078): Executing worker...
07-08 12:14:11.015: DEBUG/UIWorker(1078): execute() executing in thread main
07-08 12:14:11.025: DEBUG/UIWorker(1078): bgRunner executing in thread pool-1-thread-1
07-08 12:14:11.035: DEBUG/rweeks(1078): Sleeping for a bit.
07-08 12:14:11.035: DEBUG/rweeks(1078): Returned from execute()
07-08 12:14:21.038: DEBUG/UIWorker(1078): done() executing in thread main
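
Since done() is posted back to the main thread, it's the natural place to update the UI. A sketch of what that might look like (statusView and fetchSomethingSlow() are made-up names, not part of anything above):

UIWorker worker = new UIWorker()
{
    private String _result;

    public void doInBackground()
    {
        _result = fetchSomethingSlow();   // hypothetical long-running call
    }

    @Override
    public void done()
    {
        // Runs on the main (UI) thread, so it's safe to touch views here.
        statusView.setText( _result );
    }
};
worker.execute();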


I'm 100% sure that Blogger is going to screw up the formatting.

Public domain code, no guarantees, seems to work.

Monday, January 28, 2008

Google/Flickr Image Scraper

I just received my XO laptop through the Give-1-Get-1 program. I don't have a lot of plans for it yet but one thing I want to do is set up some photo albums for my 3-year-old son. Interests include: helicopters, airplanes, trucks, trains. The usual!

I wrote a small app to grab pictures from Google Images based on a query string. It was a short exercise in concurrent programming, more than anything else. Lesson learned: the Java 1.5 concurrency APIs don't make the producer/consumer design pattern as easy as you might think, particularly when it comes to producer shutdown.
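
For the curious, the wrinkle is shutdown: a BlockingQueue consumer will block on take() forever unless the producer signals end-of-stream. One common fix is a "poison pill" sentinel. Here's a minimal, self-contained sketch of the pattern (not the scraper's actual code):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PoisonPillDemo
{
    // Sentinel that tells the consumer the producer is finished. Using a
    // fresh String instance lets us compare by identity (==), not equals().
    private static final String POISON_PILL = new String( "EOF" );

    public static void main( String[] args ) throws InterruptedException
    {
        final BlockingQueue<String> queue = new LinkedBlockingQueue<String>( 16 );

        Thread consumer = new Thread( new Runnable()
        {
            public void run()
            {
                try
                {
                    while ( true )
                    {
                        String url = queue.take();
                        if ( url == POISON_PILL ) break;   // producer is done
                        System.out.println( "Fetching " + url );
                    }
                }
                catch ( InterruptedException ignore ) {}
            }
        });
        consumer.start();

        // Producer: enqueue the work, then the pill, then wait for the consumer.
        for ( int i = 0; i < 10; i++ )
        {
            queue.put( "http://example.com/image" + i + ".jpg" );
        }
        queue.put( POISON_PILL );
        consumer.join();
    }
}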

After I wrapped up the Google Images scraper, my friend Gwilli pointed out that Flickr is a much better resource for this sort of thing. D'oh! Fortunately it was a very small change to scrape their database, too.

Instructions:
  1. Make sure you have Java 1.5 or Java 6. Download googlor.jar.
  2. Run java -jar googlor.jar (on OS X, just double-click the jar file)
  3. The fields are pretty straightforward. By default images will go into the images/ subdirectory of the current working directory.
  4. When images are presented, press 'j' to junk them or 'k' to keep them. That's it.
I'm happy to make the source available to anyone who's interested.

(I should point out that the images grabbed from Flickr are low-res, 500x500 maximum. This is a good fit for the XO's screen.)

Thursday, November 22, 2007

WideFinder in Java6: revision3

Revision 3 of my WideFinder implementation is available here. I see a small performance gain when I stagger the thread starts by 50ms. I've also tweaked the processing of the memory-mapped "chunk" boundaries. Now it's 99.9999998% correct :(

Tuesday, November 13, 2007

WideFinder/Java6 Rev2

... The sum total of the next 6 paragraphs is, "I tested a theory without collecting performance data and my changes seemed to have no impact on performance"... Skip to the DTrace Analysis section if that doesn't sound like a fun read...

In the initial revision of the WideFinder, I couldn't get away from the thought that I was handling the data one too many times. If you look at WideFinder.run (WideFinder.java:57),
byte[] seq = new byte[ seek - _mbb.position() ];
_mbb.get( seq );
nextLine = new String( seq, LOG_CHARSET );

We're grabbing bytes out of the buffer, sticking them into a fresh byte array, and creating a new String object based on those bytes, using a specific charset. This is necessary because a CharBuffer view of the ByteBuffer treats each pair of bytes as a single 16-bit char, which means the bytes get interpreted as non-English characters.

Creating new byte[]s and new Strings like that, in a pretty tight loop, is just asking for performance trouble. I wanted to get to the point where the regex could be applied straight to the ByteBuffer. Looking at the implementation of Charset.encode, it seemed like it suffered from the same sort of space-inefficiency as my original approach.

The solution was to wrap the ByteBuffer in a trivial implementation of CharSequence that indexed directly into the current line of the buffer. The only gotcha was to use the mark() and reset() methods of the buffer in the implementation of subSequence.
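
Something along these lines, though simplified: this sketch uses absolute indexing, which sidesteps the mark()/reset() dance, and isn't the exact class from the tarball:

import java.nio.ByteBuffer;

// A CharSequence view over a ByteBuffer that treats each byte as one
// character. Only valid for single-byte encodings like US-ASCII.
final class ByteBufferCharSequence implements CharSequence
{
    private final ByteBuffer _buf;
    private final int _offset;
    private final int _length;

    ByteBufferCharSequence( ByteBuffer buf, int offset, int length )
    {
        _buf = buf;
        _offset = offset;
        _length = length;
    }

    public int length()
    {
        return _length;
    }

    public char charAt( int index )
    {
        // Absolute get() leaves the buffer's position untouched.
        return (char) ( _buf.get( _offset + index ) & 0xFF );
    }

    public CharSequence subSequence( int start, int end )
    {
        return new ByteBufferCharSequence( _buf, _offset + start, end - start );
    }

    public String toString()
    {
        StringBuilder sb = new StringBuilder( _length );
        for ( int i = 0; i < _length; i++ )
        {
            sb.append( charAt( i ) );
        }
        return sb.toString();
    }
}

With that in place, a Matcher can run straight against the buffer, since Pattern.matcher() accepts any CharSequence.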

Unfortunately, my efforts produced only a minimal increase in performance (~2%) on either my single-core Athlon or the T2K. So much for "low-hanging fruit".

DTrace Analysis

I created a new revision of the Java 6 WideFinder with some tweaks to the ByteBuffer implementation. It's available here: wf_02.tar.gz. It turned out to be a few hours' effort for a very minimal performance gain.

With that little misadventure under my belt I decided to run the WideFinder through DTrace to see where the bulk of the time was being spent. DTrace is Sun's dynamic tracing tool; it lets you inspect a bunch of different aspects of a program's performance as it is running. If you've never used it, you should really give it a try. It's pretty amazing, the stuff you get visibility into. I gather it also ships on OS X, where it underpins the "Instruments" app.

This is the output of running the method-timings.d script against the WideFinder on a 100-line log file (produced by the DataGenerator). Times given are in microseconds and represent the total time spent in methods in classes of the given package:
sun/net/www/protocol/file        103
sun/net/www                      266
sun/net/www/protocol/jar         375
java/nio/channels                474
java/util/concurrent/atomic      791
sun/security/action             1065
java/util/jar                   1731
java/util/zip                   3146
java/net                        3654
java/security                   3807
java/lang/ref                   5319
java/util/concurrent            7419
java/util/concurrent/locks      8329
java/lang/reflect              10365
sun/misc                       20678
java/nio/channels/spi          25753
sun/nio/ch                     32353
java/nio/charset              160325
java/util                     197794
sun/nio/cs                    204016
java/io                       240720
sun/reflect                   293949
wf                            464158
java/lang                    1061836
java/nio                     1596102
java/util/regex              2738455

The absolute values are not as meaningful as the relative values. The act of observing the program's performance has skewed the measurements (where's that damned cat!), but we hope that it's skewed the measurements equally for each package. From these data, we can see that the "big spenders" are java.util.regex (~39% of total time), java.nio (~25% of total time, with subpackages), and java.lang (~15% of total time).

Here are the 10 most expensive methods in java.nio (not including subpackages), also measured in microseconds:
CharBuffer.arrayOffset     43204
CharBuffer.<init>          44219
ByteBuffer.arrayOffset     51038
HeapCharBuffer.<init>      62420
CharBuffer.wrap           122109
Buffer.position           280288
Buffer.limit              286871
Buffer.checkIndex         375539
DirectByteBuffer.ix       485376
DirectByteBuffer.get     1084831

And here are the 10 most expensive methods in java.util.regex (although there seem to be a lot of very expensive methods in this package):
Pattern.escape                    165501
Pattern$CharProperty.<init>       180954
Pattern.peek                      184470
Pattern$8.isSatisfiedBy           199079
Pattern.isSupplementary           238639
Pattern.atom                      240097
Pattern$Slice.match               261159
Matcher.getSubSequence            282353
Matcher.group                     291688
Pattern$BmpCharProperty.<init>    291799
Pattern$BmpCharProperty.match    1080132

Looks like, if there's low-hanging fruit, it's in the regex processing. A little regex optimization may go a long way.

Data Processing and the Gravel Biz

My Dad's in the gravel business. Ever watch The Flintstones? He's like Mr. Slate. His job is to get as much sand and gravel as possible out of a mountain and onto barges. From there, it floats down a river to a depot where (presumably) people are willing to pay for it.

The trick about the gravel business, as with any other commodity industry I guess, is that you pretty much live or die based on 2 factors:
  • How often you touch the product.
  • How much it costs you each time you touch it.

In my Dad's case, they can't drive loaded trucks down the steep hill of their quarry to get to the river bank. The road is too narrow to allow the trucks to pass each other. So this is what they do:
  • A dump truck pulls up to a big excavator, which is scraping away at the side of the mountain pretty much non-stop.
  • The excavator fills the dump truck. Takes somewhere around 5-6 scoops, I think.
  • The dump truck backs into position at the top of the cliff and waits for the all-clear to dump its load. You don't want to rush that task or you wind up with a lot of expensive metal at the bottom of a cliff.
  • While that dump truck is getting ready to dump its load, another dump truck (from the pool, see where I'm going with this?) pulls up to the excavator and begins to receive its load.

It's not a perfect setup: there is a finite amount of room on this plateau where the excavation takes place, so you can't fit an unlimited number of dump trucks in there. Sometimes a dump truck is forced to wait while the excavator fills up another truck. We would not call this problem "embarrassingly parallel" but there is definitely a producer-consumer pattern here.

But a similar pattern plays itself out at the bottom of the cliff: loaders scoop up the dumped gravel and deposit it on a conveyor belt, where it is fed into a crusher (from there into another gravel pile, and from there onto a conveyor belt/barge, and from there to the sales facility, where another loader unloads the barge. All told, I think they handle the product 4 times).

The bottleneck here, of course, is the road. At some point as the production capacity up on the plateau expands, the capital and operational expense of widening the road will become less than the cost of handling all that sand and gravel one extra time.

What I find interesting about this problem is that the cost of handling a piece of gravel is infinitesimally small. But when you multiply that cost by several trillion, it adds up to real dollars and cents. And so it is with the Wide Finder. Handling a single byte's worth of data, or a line's worth of data, is so "cheap" we hardly ever think about it. But handling a GB's worth of data, or 10 million lines' worth, now you're talking real money. Because processing time, especially in a batch environment like this, is money.

Meh, some neat parallels there, is all I'm saying.

Friday, November 09, 2007

WideFinder/Java6

Tim Bray has recently published a series of articles about a project he calls Wide Finder. It pretty much amounts to a multi-threaded text processor for large files. The idea is to implement the requirements in a variety of languages and determine the strengths and weaknesses of each implementation in a highly-parallelized environment.

Here's my stab at it: WideFinder in Java 6. Unfortunately, blogger is a really crappy way to publish this sort of thing, but here goes...

The idea is to treat the log file as a random-access file and use Java's NIO API to memory-map one chunk of it per worker thread. The file is processed on a line-by-line basis, but the chunks are split up based roughly on the file size / number of workers. This means that each worker's "chunk" probably doesn't begin or end on a line break. These edge cases (literally, the edges of the buffer) are resolved after all the worker threads have completed.
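
In outline, the chunking looks something like this (a sketch with made-up names: Worker stands in for the real worker class, and error handling is elided):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkMapper
{
    // Map one read-only chunk of the log per worker thread.
    public static void map( String logPath, int numWorkers ) throws Exception
    {
        RandomAccessFile raf = new RandomAccessFile( logPath, "r" );
        FileChannel channel = raf.getChannel();
        long fileSize = channel.size();
        long chunkSize = fileSize / numWorkers;

        for ( int i = 0; i < numWorkers; i++ )
        {
            long start = i * chunkSize;
            // The last worker takes whatever remains after integer division.
            long size = ( i == numWorkers - 1 ) ? fileSize - start : chunkSize;
            MappedByteBuffer chunk =
                channel.map( FileChannel.MapMode.READ_ONLY, start, size );
            new Thread( new Worker( chunk ) ).start();
        }
    }

    // Trivial stand-in so the sketch compiles; the real worker scans its
    // chunk line by line and applies the regex to each line.
    private static class Worker implements Runnable
    {
        private final MappedByteBuffer _chunk;

        Worker( MappedByteBuffer chunk ) { _chunk = chunk; }

        public void run() { /* process _chunk */ }
    }
}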

The initial implementation is really straightforward. My first approach used ByteBuffer.asCharBuffer to treat the memory-mapped chunk as character data. The problem was that 2 ASCII characters were getting packed into each 16-bit Java char, which meant that all the file data was appearing as CJK characters. It's totally sensible that Java would do this, but I didn't see a quick way around it. I'm going to take a closer look, though, because I think that the current implementation handles the data more than it needs to.
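
The packing is easy to demonstrate in isolation (a standalone snippet, not from the WideFinder source):

import java.nio.ByteBuffer;
import java.nio.CharBuffer;

public class CjkDemo
{
    public static void main( String[] args ) throws Exception
    {
        ByteBuffer bytes = ByteBuffer.wrap( "GET /ongoing".getBytes( "US-ASCII" ) );
        CharBuffer chars = bytes.asCharBuffer();
        // Each pair of ASCII bytes becomes one 16-bit char: 'G' (0x47) and
        // 'E' (0x45) combine into U+4745, a CJK ideograph.
        System.out.println( chars );
    }
}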

Timing Data
  • The source file is a 4,000,000-line (178MB) file where each line matches the regex specified by Tim in the Wide Finder article (see the source for wf.DataGenerator).
  • Times are given as "number of workers"x"elapsed (wall) time in seconds"
  • Java VM is 1.6.0
  • VM arguments are -Xmx1024M -Xms1024M.

AMD Athlon 64 1x2.2GHz, 2.5GB RAM, IO is nothing special (SATA something): 1x13.2 2x12.3 4x12.6
Sun T2000 24x1GHz, 8GB RAM, IO is nothing special (stock 80GB): 1x142.9 2x53.0 4x28.2 24x9.7
Intel Xeon 4x2.8GHz, 3GB RAM, some kind of SCSI I/O: 1x14.6 2x8.67 4x8.49

My 2GHz Core 2 Duo MacBook had to be excluded from this test because the code is currently dependent on Java 6... very frustrating.

Instructions to Run

Once the code has been compiled, the command is:

java -cp <output directory> wf.WideFinder <log file> <regex> [num-workers]

Where "output directory" is where the classes were compiled to, "log file" is the path to the log file, "regex" is the regex to search for.  In this case, regex=='GET /ongoing/When/\d\d\dx/(\d\d\d\d/\d\d/\d\d/[^ .]+)'.  If "num-workers" is not specified, the value returned by Runtime.getRuntime().availableProcessors() is used.

Next on the to-do list
  • Figure out how to use the NIO CharBuffer with US-ASCII encoding... or at least something that lets me deal with the characters as something other than CJK chars. I confess that I'm frightfully ignorant of Unicode and character encodings. About once a year I think, "I gotta learn that stuff..." so I read up on it, and I get to the point where I think I understand it, but I never seem to apply it in my day job, so it's quickly forgotten.
  • Run this bad boy through DTrace and see where the hot spots are... just a guess, I reckon Pattern.matches is burning up most of the CPU time.
  • Some low-hanging fruit is probably to do a plaintext match on the initial, constant portion of the regex... ideally take advantage of the 64-bit architecture and compare the first 8 characters of the line all at once!
  • Oh, yeah, also I should get around to ensuring that the program is actually correct... it's definitely at the point where it's useful to gather performance data but I think there may be a couple corner cases that it'd puke on.
  • Back-port to Java 5 so I can develop/test on my MacBook (low priority)
... Updated 11/9 21:17, removed inline source, Blogger was having a hissy fit about it.



... Updated 11/9 21:41, just realized that to run this against actual log data, you'd probably have to change m.matches to m.find at line 76 of WideFinder.java.  That will skew the performance numbers, you may be able to optimize the regex by putting a caret at the beginning.

Friday, July 27, 2007

Comparing Java Performance on Multi-Core CPUs

The tests in this article measure fixed-point Java performance on a variety of CPU architectures. In summary: Java can take advantage of multiple cores to avoid CPU contention, but in some cases not as well as you'd expect.
I tested on 4 hardware configurations:

  • Athlon 64 3500+: 1 CPU, 1 core, 2.2GHz. Running Linux kernel version 2.6.13, Java 1.6.0_02. This was used as a baseline.

  • MacBook Core 2 Duo: 1 CPU, 2 cores, 2.2GHz. Running OS X kernel version 8.10.1, Java 1.6.0 (b88)

  • Dual Opteron 248: 2 CPUs, 1 core each, 2.2GHz/core. Running Linux kernel version 2.6.13, Java 1.6.0_02

  • Sun T2000: 1 CPU, 6 cores, hardware support for 4 threads per core. Running SunOS kernel version 5.10


This is the most trivial test class I could come up with. It's actually more complex than I thought it would be. All it does is synchronize 1 or more threads to calculate a lot of large prime numbers at the same time. This task is designed to provide high CPU contention and low IO/memory contention.

package ThreadTester;

import java.math.BigInteger;
import java.util.concurrent.CyclicBarrier;

public class ThreadTester
{
    private static final long START = Long.MAX_VALUE;
    private static final int NUM_PRIMES = 4000;

    private final CyclicBarrier _start;
    private final CyclicBarrier _finish;
    private final int _numThreads;

    public ThreadTester( int numThreads )
    {
        _numThreads = numThreads;
        _start = new CyclicBarrier( _numThreads, new Runnable()
        {
            public void run()
            {
                System.out.print( _numThreads + " " + System.currentTimeMillis() + " " );
            }
        });
        _finish = new CyclicBarrier( _numThreads, new Runnable()
        {
            public void run()
            {
                System.out.println( System.currentTimeMillis() );
            }
        });
    }

    public static void main( String[] args )
    {
        int numThreads = Integer.parseInt( args[ 0 ] );
        new ThreadTester( numThreads ).go();
    }

    private void go()
    {
        for ( int i = 0; i < _numThreads; i++ )
        {
            new Thread( new PrimeFinder() ).start();
        }
    }

    private class PrimeFinder implements Runnable
    {
        private BigInteger _bigNum = BigInteger.valueOf( START );

        public void run()
        {
            try
            {
                _start.await();
                for ( int i = 0; i < NUM_PRIMES; i++ )
                {
                    _bigNum = _bigNum.nextProbablePrime();
                }
            }
            catch ( Exception ignore ) {}
            finally
            {
                try
                {
                    _finish.await();
                }
                catch ( Exception ignore ) {}
            }
        }
    }
}
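
Each row of the timing tables below comes from one run of this class, started with the thread count as its only argument. For example (assuming the classes were compiled to the current directory):

java -cp . ThreadTester.ThreadTester 4

The barrier actions print the thread count and the start/finish timestamps; the elapsed-seconds column was computed from the difference between those two timestamps.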

The raw data follows. The timing data from different architectures should not be compared to each other; for each machine, the point where the elapsed time (in seconds) begins to grow linearly with the number of threads is where CPU contention among the threads sets in.
The Sun T2000 server clearly exhibits the best thread utilization. This is unsurprising given the number of independent execution units available in the Niagara processor. Note that had this test involved floating-point math, contention for the Niagara's single FPU among its 24 execution units would be intense.

Athlon-64
1 6.81 1185600041009 1185600047814
2 13.58 1185600047914 1185600061492
3 20.3 1185600061558 1185600081853
4 26.9 1185600081925 1185600108820
5 33.34 1185600108906 1185600142247
6 40.13 1185600142336 1185600182463
7 46.84 1185600182546 1185600229390
8 54.85 1185600229477 1185600284327
9 60.45 1185600284394 1185600344845
10 65.59 1185600344954 1185600410545

MacBook
1 7.05 1185596894768 1185596901822
2 11.34 1185596902131 1185596913471
3 17.89 1185596914281 1185596932169
4 25.19 1185596932657 1185596957846
5 31.64 1185596958512 1185596990148
6 37.19 1185596990764 1185597027957
7 42.94 1185597028423 1185597071366
8 48.03 1185597071971 1185597120005
9 54.92 1185597120659 1185597175582
10 59.95 1185597175905 1185597235859

Dual Opteron 248
1 6.16 1185598190089 1185598196249
2 8.32 1185598196398 1185598204713
3 15.57 1185598204849 1185598220423
4 21.08 1185598220564 1185598241640
5 25.96 1185598241766 1185598267726
6 30.86 1185598267868 1185598298726
7 35.15 1185598298870 1185598334015
8 41.11 1185598334145 1185598375253
9 45.52 1185598375373 1185598420891
10 53.08 1185598421038 1185598474117

T2K
1 31.2 1185597917415 1185597948610
2 31.47 1185597949296 1185597980763
3 32.47 1185597981545 1185598014017
4 34 1185598014867 1185598048869
5 32.84 1185598049745 1185598082581
6 34.11 1185598083555 1185598117668
7 35.38 1185598118647 1185598154031
8 37.43 1185598154995 1185598192425
9 38.97 1185598193408 1185598232373
10 40.06 1185598233337 1185598273396

Friday, June 16, 2006

SuSE 10.0: MP3 Support in K3B for Beginners

Hey! In the words of the immortal Jim Anchower, I know it's been a while since I rapped at ya. This entry is going to take a bit of a detour from the Media Center setup. Instead I'll talk about building functionality into your Linux OS based on a combination of package management and good-old-fashioned building from open source.

This article is targeted more towards open-source beginners... if you've been following along with the rest of the stuff on this site, you've probably already progressed beyond this point. In any event, I welcome your comments.

SuSE Linux, by default, does not ship with MP3 support for many of its applications. I'm not sure if this is because they would be forced to license such technology from the patent-holders of the MP3 encoding algorithm, or if the RIAA prevents it somehow. Anyways, it's a pain in the ass. For instance, K3B, the popular CD-burning software for the K Desktop Environment (KDE), is pretty much crippled for making audio CDs. Fortunately, it's pretty simple to rebuild K3B and include MP3 support.

I shouldn't say simple. The process itself is actually fairly complex, with dozens of different software products interacting with each other. But the trick is to use the right tools to make it simple.

Every Linux distribution these days, as far as I know, comes with some sort of package management software. This tool's job is to resolve all the interactions and interdependencies between various software packages. In SuSE, the distro I'll be covering here, this tool is called YAST. In Debian or a Debian-based distro such as Ubuntu, it's called apt-get. In Gentoo I believe it's called emerge. Point is, whatever distribution you've chosen, there will almost certainly be a tool available to help you with this process.

I should say right up front: if your distro hasn't got one, or you can't find it, stop now. In the time it takes you to resolve all the dependencies for building K3B you can install Ubuntu and never have to worry about this again.

SuSE 10.0 Specific Note


You can make these package installs go a lot faster if you (a) have a copy of the original installation media and (b) have some hard drive space to spare. What I did was create a directory, /suse10dvd/CD1, and copied the entire file/directory structure from my SuSE DVD into that directory. Then, in YAST, you create a new installation source from the directory /suse10dvd (it will automatically look in CD1 for all the requisite files).

Set up the development tools


Eventually, all the package management in the world is only going to get us so far. When we reach that point we're going to need to take source code and turn it into a binary executable. And to do that we will need some tools. Fortunately, these tools can be installed by - you guessed it - the package management system. Again, I will be assuming you're using SuSE 10.0 here. As root, fire up YAST, and click on Software Management (might as well keep that screen open for a while).

Install the following packages: gcc g++ make automake autoconf. Accept any dependencies YAST points out.

Set up K3B source prerequisites


All these packages are also installable via YAST. Go ahead and install them now: xorg-x11-devel zlib-devel qt-devel libjpeg-devel kdebase3-devel taglib-devel libmusicbrainz-devel

If you notice that you're installing one of these "devel" packages, but the corresponding package is not selected (ie. libjpeg for libjpeg-devel), go ahead and select that too. That's a lot of stuff, and it'll probably take a while to install. While it's working you can download some of the stuff we'll need that SuSE can't provide.

Getting the source prerequisites


You'll need to get the source code for libmad. This provides K3B with the ability to decode MPEG audio (ie. what we commonly refer to as an mp3 file). Get it here. Use SourceForge or their FTP site, doesn't matter. You don't need the id3tag or madplay stuff, just the libmad download.

You can build a more complete K3B by downloading LAME from lame.sourceforge.net. LAME ("LAME Ain't an MP3 Encoder") allows K3B and other tools to encode audio into MP3 format. This isn't necessary for what we want to do, but it will add valuable functionality to your SuSE installation.

These source packages come in a .tar.gz format - commonly known as a tarball. Once you've downloaded them to a convenient spot on your hard disk you can extract them with the "tar xvfz <source package>.tar.gz" command.

Extracting one of these tarballs will create a directory holding the source code, ie. lame-3.97. Once you're in this directory you can build the executable from the source code using these three commands - they're the same for pretty much any open source software package:

  1. ./configure

  2. make

  3. (as root) make install


The first command ensures that all the necessary dependencies are available and sets up the eventual install paths. The second command actually takes the source code and turns it into binary files - executable files, code libraries, etc. The third command takes those output files and puts them in the proper locations so that other programs (ie. k3b) can find them.

I should note that the configure script takes a lot of options. You can see some of them by running ./configure --help. Unless you see something that really jumps out at you, you should accept the defaults for this project.

Make sure that you run the last command, make install, as either the root user or someone with root-like permissions. Otherwise you probably won't be able to put the generated files in the default spots.

Building K3B


Is basically a non-event. Download the K3B source tarball from the K3B homepage. Extract it just like you did with the prerequisites (note that if the file ends in a .tar.bz2 suffix, you should use the command tar xvfj <filename>.tar.bz2 rather than tar xvfz). Go into the newly-created directory and run the same ./configure, make, make install commands as above.

Note that when you run the configure script, it will show you which extra features will be built into K3B. You should see something like this:

K3b Configure results:
------------------------------------------
Ogg Vorbis support: yes

Mp3 decoding support (libmad): yes

Audio meta data reading with Taglib: yes

libsndfile audio decoding support: yes

FLAC support: no
You are missing the FLAC++ headers and libraries.
The FLAC decoding plugin won't be compiled.

Musepack support: no
You are missing the Musepack headers and libraries >= 1.1.
The Musepack audio decoding plugin won't be compiled.

Lame Mp3 encoder plugin: yes

Audio resampling:
using version bundled with K3b

FFMpeg decoder plugin (decodes wma and others):
no
You are missing the ffmpeg headers and libraries
version 0.4.9 or higher.
The ffmpeg audio decoding plugin (decodes wma and
others) won't be compiled.

Resmgr support: yes

Audioplayer available (aRts) yes

Compile K3bSetup 2: yes

Tag guessing using MusicBrainz no
You are missing the musicbrainz headers and libraries.
K3b will be compiled without support for tag guessing.

Compile HAL support no
You are missing the HAL >= 0.4 headers and libraries
or the DBus Qt bindings.


Good - your configure finished. Start make now


Your new version of K3B will, by default, be installed right over top of the old version. This is convenient, but you must remember that the package manager (YAST) doesn't know you've swapped in this new version, so any updates that SuSE may try to apply will probably blow away your changes. Careful!

If you'd rather put K3B in a different place, use the --prefix argument to configure. Again, configure --help has more information.

That should be it! If you can drag MP3 files into your Audio CD project, you'll know everything's working. Double check via the Help Menu that the version of K3B that you're running corresponds to the version that you just built - you may be running the old executable.

If you have any trouble, Google is your friend. If you're still in trouble, my contact information is on the sidebar to the right of this page.

Good luck!