Tags: coding

Dwight

(no subject)

Ow, my head. I slept as usual last night, but I still feel as if I hadn't slept at all. Perhaps it's the chaotic dreams, or who knows?

What I do know is that I've been far too slow with my posts here. Programming can be very tempting; as I've told others, it lets you do anything on your computer, as long as you have the knowledge to know what to do and the endurance to pull it off. The only thing you have to do to get large projects done on your own is forget everything else you're doing... so here we are.

I hope you still recognize me, though -- that you haven't completely forgotten who I am :)

*winghugs all of you*
Dwight

(no subject)

Oops - I haven't written much here lately! But with all the balls to juggle, and me putting things off until later (that's easy..), and going stoatish/raveny at all the shiny things, it's easy to get distracted :)

And the programming game still has its hold on me.

So there's your explanation. I would write something more complex, but it's getting late here, so I'll just have to save that for later (again!). And Lhexa, I'll get to replying, eventually; I haven't forgotten the posts :)
Tags: tech contradiction

(no subject)

Seems my concentration has been pulled away from the journal to other things again; first, finding a bug in a program[0], and then pondering a problem which I call the weighted benchmark problem.

That one goes as follows: say you have lots of tests against (for instance) a graphics card. The average score over the full suite of tests gives a fairly accurate indication of the card's performance. The problem is that running all of the tests takes too long, so you can only choose k of them. Now pick those k tests so that the difference between the average score over the chosen k and the average score over the entire suite is minimized, for the graphics cards you have data about[1].
To make the tests more expressive, you're also permitted to assign each test a weight; each score is multiplied by its test's weight, and the average is divided by the product of all the weights. The idea is that if, say, the card's performance is heavily linked to how well it does on a set of slightly different texture tests, you don't have to include all of them in the benchmark as long as they're not too different; just include one and give it a high weight.

I'm pretty sure this problem is hard, but there may be good approximations available. Without weighting, the problem resembles proportional representation (tests that represent the suite, like candidates that represent the people), which can be quite hard itself.
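
To pin the problem down, here's a little brute-force sketch of the unweighted variant in C++. The scores-as-a-matrix layout and the names are just mine for illustration, and it only works for small suites, since it tries every k-subset.

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// scores[card][test]: one row of test scores per graphics card.
using Scores = std::vector<std::vector<double>>;

// RMSE, over all cards, between the average of the chosen tests and the
// average of the full suite.
double rmse_for_subset(const Scores& s, const std::vector<int>& subset)
{
    double err = 0.0;
    for (const auto& card : s) {
        double full = 0.0, part = 0.0;
        for (double x : card) full += x;
        full /= card.size();
        for (int t : subset) part += card[t];
        part /= subset.size();
        err += (part - full) * (part - full);
    }
    return std::sqrt(err / s.size());
}

// Try every k-subset of tests and keep the one with the smallest RMSE.
std::vector<int> best_subset(const Scores& s, int k)
{
    const int n = static_cast<int>(s.front().size());
    std::vector<int> mask(n, 0);
    std::fill(mask.end() - k, mask.end(), 1);     // k trailing ones
    std::vector<int> best;
    double best_err = std::numeric_limits<double>::infinity();
    do {
        std::vector<int> subset;
        for (int t = 0; t < n; ++t)
            if (mask[t]) subset.push_back(t);
        const double err = rmse_for_subset(s, subset);
        if (err < best_err) { best_err = err; best = subset; }
    } while (std::next_permutation(mask.begin(), mask.end()));
    return best;
}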

[0] And what an odd bug it was! I think there's some nondeterminism going on, but it's not coded in my native programming language, so I'm not sure. What I do know is that it involves an expression where an assignment of the form "a = b * c" fails, but "a = b; a *= c" succeeds. (A toy C++ sketch of how such a split can diverge follows these notes.)
[1] There are two "error metrics" for which this makes sense: either the error is the RMSE between the average over the chosen tests and the average over the entire suite, or it's the worst-case error.
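
And since [0] is hard to explain in words alone: here's a purely hypothetical C++ toy (the buggy program isn't in C++) showing how "a = b * c" and "a = b; a *= c" can disagree when operator* and operator*= are written independently and one of them has a bug.

#include <iostream>

struct Fixed {              // toy fixed-point number: value is raw / 100
    long raw;

    Fixed& operator*=(const Fixed& o) {
        raw = raw * o.raw / 100;        // correct rescaling
        return *this;
    }
};

Fixed operator*(const Fixed& a, const Fixed& b) {
    return Fixed{a.raw * b.raw};        // bug: forgot to divide by 100
}

int main() {
    Fixed b{150}, c{200};               // 1.50 and 2.00
    Fixed x = b * c;                    // uses the buggy operator*
    Fixed y = b; y *= c;                // uses the correct operator*=
    std::cout << x.raw << " vs " << y.raw << '\n';   // 30000 vs 300
}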
Dwight

(no subject)

I just can't seem to stop programming, can I? The last few days I've been making another vector quantization program. Hey, you already did that, you'll say.. but this one uses SSIM (rather than just plain old RMS error) to measure how good it is at guessing. Lots of trickery is involved because of the costly Gaussian convolution SSIM needs at each step.. and there are 32768 steps for every single VQ block, so it adds up really quickly!
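
For reference, SSIM compares two patches through their Gaussian-weighted means, variances and covariance, roughly ((2*mx*my + C1) * (2*sxy + C2)) / ((mx^2 + my^2 + C1) * (sx^2 + sy^2 + C2)), instead of raw pixel differences. Here's a rough C++ sketch of a single windowed comparison; the window size, sigma and constants are the usual defaults for 8-bit images, not necessarily what my program uses.

#include <cmath>
#include <vector>

// SSIM between two equally sized patches x and y (each side*side values),
// using a Gaussian weighting over the window.
double ssim_window(const std::vector<double>& x,
                   const std::vector<double>& y,
                   int side)
{
    const double C1 = 6.5025, C2 = 58.5225;     // (0.01*255)^2, (0.03*255)^2
    const double sigma = 1.5, half = (side - 1) / 2.0;

    // Gaussian weights over the window, normalised to sum to 1.
    std::vector<double> w(side * side);
    double wsum = 0.0;
    for (int i = 0; i < side; ++i)
        for (int j = 0; j < side; ++j) {
            double d2 = (i - half) * (i - half) + (j - half) * (j - half);
            w[i * side + j] = std::exp(-d2 / (2 * sigma * sigma));
            wsum += w[i * side + j];
        }
    for (double& v : w) v /= wsum;

    // Weighted means, variances and covariance.
    double mx = 0, my = 0;
    for (size_t k = 0; k < w.size(); ++k) { mx += w[k] * x[k]; my += w[k] * y[k]; }
    double vx = 0, vy = 0, cxy = 0;
    for (size_t k = 0; k < w.size(); ++k) {
        vx  += w[k] * (x[k] - mx) * (x[k] - mx);
        vy  += w[k] * (y[k] - my) * (y[k] - my);
        cxy += w[k] * (x[k] - mx) * (y[k] - my);
    }

    return ((2 * mx * my + C1) * (2 * cxy + C2)) /
           ((mx * mx + my * my + C1) * (vx + vy + C2));
}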

And this was after I told myself I should rest, because that twelve-thousand-line project had been so demanding. Heh. In a month, I won't be able to understand a single line of that megaproject, just its ideas.

(So I guess I'm write-many-read-once. That's an odd combination.)

-

I think I'm seeing why overengineered economic/political systems are mostly pointless. It's not the initial condition that matters, it's the dynamics. In other words, having a centrally managed tred give a completely accurate labor-value is of little concern if you have a decentralized, self-managing alternative that gives you a 99% correct labor-value; especially if the centralism of the "completely accurate" solution leads to some class aggregating power and turning it against the people.

Or - simple and robust beats brittle and sophisticated.

I also see what Raki meant by "the devil of theoretical economics is in the assumptions". The preconditions for a market-based Pareto optimum are a particularly good example, so much so that it would be amusing if it weren't taken so seriously.
Dwight

(no subject)

Amusing g++ error of the day:

tool.cc:90: error: prototype for 'bool order::operator()(pointer, pointer)' does not match any in class 'order'
tool.cc:67: error: candidate is: bool order::operator()(pointer, pointer)
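
For the record, the usual culprit behind this family of errors is a small declaration/definition mismatch that the printed types don't make obvious. A guess at a minimal reproduction (not the actual tool.cc, and the real cause may well have been subtler):

// Declaration is const-qualified, out-of-class definition forgot the const,
// so g++ complains that the prototype doesn't match anything in the class.
struct item;
typedef item* pointer;

class order {
public:
    bool operator()(pointer a, pointer b) const;   // declared const
};

bool order::operator()(pointer a, pointer b)       // defined without const
{
    return a < b;
}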
Dwight

(no subject)

I would write something about me and the weirdness going on, but that's about all I've been doing, so instead I'll say this.

My class-based reimplementation of the vector quantization program now works! Or at least, the rendering part of it does, which is what I programmed yesterday. Now I only have to add the optimization things and then start on the k-means clustering. Yays!
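
For the curious, the k-means part is just the standard assign-then-update iteration (Lloyd's algorithm). A bare-bones C++ sketch, with a placeholder data layout that has nothing to do with the real program:

#include <limits>
#include <vector>

using Vec = std::vector<double>;

// Squared Euclidean distance between two equal-length vectors.
static double dist2(const Vec& a, const Vec& b) {
    double d = 0.0;
    for (size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Lloyd-style k-means: alternate between assigning each point to its nearest
// centroid and moving each centroid to the mean of its assigned points.
std::vector<Vec> kmeans(const std::vector<Vec>& points, int k, int iters) {
    // Naive seeding: the first k points (assumes points.size() >= k).
    std::vector<Vec> centroids(points.begin(), points.begin() + k);
    std::vector<int> owner(points.size(), 0);

    for (int it = 0; it < iters; ++it) {
        // Assignment step: nearest centroid for every point.
        for (size_t i = 0; i < points.size(); ++i) {
            double best = std::numeric_limits<double>::infinity();
            for (int c = 0; c < k; ++c) {
                double d = dist2(points[i], centroids[c]);
                if (d < best) { best = d; owner[i] = c; }
            }
        }
        // Update step: each centroid becomes the mean of its points.
        std::vector<Vec> sum(k, Vec(points[0].size(), 0.0));
        std::vector<int> count(k, 0);
        for (size_t i = 0; i < points.size(); ++i) {
            for (size_t d = 0; d < points[i].size(); ++d)
                sum[owner[i]][d] += points[i][d];
            ++count[owner[i]];
        }
        for (int c = 0; c < k; ++c)
            if (count[c] > 0)
                for (size_t d = 0; d < sum[c].size(); ++d)
                    centroids[c][d] = sum[c][d] / count[c];
    }
    return centroids;
}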

And if that sounded like something out of VOY, too bad! :) I guess.