Tags: the technocracy of the neats
(no subject)
And since we're on the topic of nights of polar darkness, here is something I made a week ago or so. I thought I would make something for the "I don't know if I'm joking or serious, probably both" hypothetical technocracy plan[1], but the picture turned out too Imperial, if you get what I mean.
[1] It had personal magnetic levitation cars with computer routing to always find the shortest path, fission as the intermediate solution to power problems and fusion as the long-term one, ballistic linear-accelerator-driven pods instead of aircraft, "pretend money" hybrid prediction markets to guide policy, and direct calculation of ordinary economic prices. Sounds "efficient" enough?
(no subject)
My last entry about a possible solution to the council democracy "gerrymander" problem led me right back onto the old "technocracy of the Neats" track, which in turn made me think about borders as a nearest-neighbor function (Voronoi diagrams) on a shortest-path metric.
So I've been optimizing this Dijkstra function to find the shortest path from a single source (a geographical center) in order to figure out which parts of the land would belong to a given "nation". The way I do it is to run a shortest-path search from each of the n geographical centers to produce n 2D vectors of p*q distances, then assign each point to the center with the minimal distance. Hence "nearest neighbor".
After a while, I realized there's no reason to calculate shortest paths for the entire world (all p*q pixels): if the shortest path to some point f is already greater than the current record, no node reached through f can beat that record (we could do better by starting from whatever center the current record is based on and heading toward the destination instead).
Say there are two centers to calculate shortest paths for. The first center's search has to visit all the points, but the second can stop early, so we save about half the calculation on the second center.
But hold on a minute. If it's possible to break early in this way, wouldn't it then be possible to run the modified Dijkstra just once instead of n times? The idea would be to seed the queue with the initial nodes of *all* the geographical centers, using a struct that contains the center number in addition to the coordinates, and then run as usual. Since they all start at (source.x, source.y) with a path length of 0, and the heap I'm using automatically orders by least path length first, the shortest paths from each center should "spread outwards" and limit each other's borders. But it can't be that easy, can it?
Am I missing something? I haven't actually implemented this version yet.
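As far as I can tell, yes, it really is that easy: seeding the heap with every center at distance 0 is the standard "multi-source Dijkstra" trick. Here's a minimal Python sketch of how I imagine that single run would look, under the assumption of a 4-neighbor grid where each cell carries a movement cost (all names are my own, not from any actual implementation):

```python
import heapq

def voronoi_labels(cost, centers):
    """Assign every grid cell to the nearest of several centers under a
    shortest-path metric: multi-source Dijkstra, seeded with all centers
    at distance 0. cost[y][x] is the price of stepping onto cell (x, y);
    centers is a list of (x, y) tuples."""
    h, w = len(cost), len(cost[0])
    dist = [[float('inf')] * w for _ in range(h)]
    owner = [[-1] * w for _ in range(h)]
    pq = []
    for i, (x, y) in enumerate(centers):
        dist[y][x] = 0
        heapq.heappush(pq, (0, i, x, y))
    while pq:
        d, i, x, y = heapq.heappop(pq)
        if d > dist[y][x]:
            continue  # stale heap entry; a shorter path won already
        owner[y][x] = i
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and d + cost[ny][nx] < dist[ny][nx]:
                dist[ny][nx] = d + cost[ny][nx]
                heapq.heappush(pq, (dist[ny][nx], i, nx, ny))
    return owner
```

On a uniform-cost grid this reduces to ordinary Voronoi cells under a Manhattan-ish metric; with terrain-dependent costs the borders bend, which is the whole point.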
(no subject)
Random pondering: A democracy based on a council of councils[1] can be subverted by a minority slightly above 0.5^n of the population, where n is the number of council layers. One way to counter this is to require a greater margin for changing the status quo; if the margin is 60%, for instance, the required minority grows to 0.6^n. However, as n gets large, this still becomes very small, and a substantially smaller minority can then rig the system in favor of the status quo.
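The arithmetic of that subversion bound, as a one-line sketch (the margin and layer count are just parameters, nothing more):

```python
def min_controlling_fraction(margin, layers):
    """Smallest population share that can force decisions through a
    stacked council system: control `margin` of the members at every
    one of `layers` levels, hence margin ** layers overall."""
    return margin ** layers

# A bare majority (0.5) over five layers needs only ~3.1% of the
# population; a 60% margin lifts that to ~7.8%.
```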
A "simple" solution would be to weight the votes of each council member according to the confidence given by the lower council that elected them. For instance, if one member was approved by two thirds of the council below and another by only half plus one, the first member would have a stronger say than the second in electing members for the council above. This would, at least on the surface, seem to solve the problem, because the rounding errors no longer accumulate. But handling weighted votes on every decision (a council doesn't just choose delegates for the councils above; it also makes decisions on the local level) would become very complicated.
So we could have a compromise where council members cast weighted votes only when electing higher council members (with the weights obviously accumulated upwards), but otherwise hold power according to a simple "one man, one vote" rule. How easily can one subvert this "compromise system"? Intuition says (0.5+1/k)^2 for council size k is sufficient to do it without PR, somewhat more with PR. But I could be wrong.
In any case, I have a snowflake's chance of actually seeing a governmental system arranged in this fashion, so this is all theoretical. But theory can be interesting, too!
(... he says while drawing a Voronoi diagram on top of a world map to figure out "technocratic country borders")
[1] Such a theoretical democracy works this way: the bottom layer (a group of people) elects a number of representatives ("delegates") to the first council. The first council elects a number of representatives to the second, the second to the third, and so on. The point is that the bottom layer stays close to the people, while the entire system scales up very quickly, at least in theory. (Bonus points go to whoever can figure out how representative this system is wrt parliamentary systems if governmental representation is an instance of set covering/VQ where the sets/vectors are issue bundles.)
(no subject)
Random thought: What's the logical extreme of daylight saving time? A time system where a certain time is fixed to sunrise, rather than noon. Totally impractical, of course[1].
But that's how my thoughts go.
[1] I leave the engineering details of implementing vertical "time zones" (to account for near-polar days) up to the reader.
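As a toy sketch of the idea (ignoring the near-polar problem in the footnote entirely), assume we simply declare today's local sunrise to be 06:00 and shift the ordinary clock accordingly; the anchor hour and function name are my own invention:

```python
from datetime import datetime, timedelta

def sunrise_clock(now, sunrise, anchor_hours=6):
    """Hypothetical sunrise-anchored time: shift the ordinary clock so
    that today's sunrise always reads as `anchor_hours` o'clock.
    `now` and `sunrise` are datetimes in the same ordinary time zone."""
    offset = timedelta(hours=anchor_hours) - timedelta(
        hours=sunrise.hour, minutes=sunrise.minute, seconds=sunrise.second)
    return now + offset
```

So with sunrise at 08:30, ordinary 12:00 reads as 09:30. Every location, on every day of the year, gets its own offset, which is exactly what makes it impractical.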
(no subject)
Here's another example of the difficulty being hidden in the assumptions: the Coasian externality solution. Say a process, while rewarding one party, causes misery to another who is not involved in the process; then we have an externality (like pollution).
Now, in economics, there's a theorem saying that if certain conditions hold, property rights are all you need to make everyone happy. However, those conditions are the assumptions, and they're very unrealistic. For the simple case (where it doesn't matter who gets which property rights): negotiation must be free (so everyone has to have lots of time), it must be easy to find out who's causing the misery (the side effect must be traceable), the market must be perfect (complete information and perfect competition), and people have to be perfectly rational. Even the complex case (which deals with transaction costs) contains all the other conditions besides "negotiation must be free".
Oh, well, enough knocking down abstractions. Hey, I make abstractions myself, so...
Now I'm just waiting for the new computer parts to get here so I don't have to juggle my data around an 80 GB disk with 100 megabytes or so left. And since I didn't have any proper disk connectors free, I had to get a new motherboard, and then a new CPU to fit it.. so it'll be quite a bit faster too, not just larger. Hopefully there's enough physical space in the case to fit the disk in there.
(no subject)
I just can't seem to stop programming, can I? The last few days I've been making another vector quantization program. Hey, you already did that, you'll say.. but this one uses SSIM (rather than plain old RMS error) to measure how good it is at guessing. Lots of trickery is involved because of the costly Gaussian convolution SSIM needs at each step.. and there are 32768 steps for every single VQ block, so it adds up really quickly!
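The SSIM formula itself is simple; the cost is in computing the (ideally Gaussian-weighted) means, variances and covariance at every step. A plain, unweighted sketch over two flattened blocks might look like this (the constants are the usual published defaults; the Gaussian windowing mentioned above is deliberately left out):

```python
def ssim(a, b, L=255, k1=0.01, k2=0.03):
    """SSIM between two same-sized blocks given as flat lists of pixel
    intensities in [0, L]. Uses plain block statistics; the full metric
    computes them under a sliding Gaussian window instead."""
    n = len(a)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2  # stabilizing constants
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((y - mu_b) ** 2 for y in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical blocks score 1; structural disagreement (inverted contrast, shifted mean) pulls the score down, which is what makes it a more perceptual codebook-fitting target than RMS alone.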
And this was after I told myself I should rest because that twelve-thousand-line project had been so demanding. Heh. In a month, I won't be able to understand a single line of that megaproject, just its ideas.
(So I guess I'm write-many-read-once. That's an odd combination.)

I think I'm seeing why overengineered economic/political systems are mostly pointless. It's not the initial condition that matters, it's the dynamics. In other words, having a centrally managed system give a completely accurate labor-value is of little concern if you have a decentralized, self-managing alternative that gives you a 99% correct labor-value; especially if the centralism of the "completely accurate" solution leads to some class aggregating power and turning it against the people.
Or: simple and robust beats brittle and sophisticated.
I also see what Raki meant by "the devil of theoretical economics is in the assumptions". The preconditions for a market-based Pareto optimum are a particularly good example, so much so that it'd be amusing if it weren't taken so seriously.
(no subject)
Four days since I last posted anything here? It feels like two.. odd how the days pass, but then this isn't the first time the past has played tricks on me like that. Some things from years back feel like they happened yesterday, while other recent things feel like they happened years ago.
So what did I do the last {two/four} days? For one, I found that trying to make English more "efficient" by encoding its words into a 27-letter (branch factor) Huffman tree doesn't work. I was trying to find a wordlist/transformation that would fit the "clear-cut, concise" auxiliary language of the second of my critters (I really need to find a name for them, not just "RfS"), as their natural language is supposed to be extremely complicated, so (in their scenario) they couldn't speak to us that way.
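For the curious, an n-ary Huffman construction with branch factor 27 can be sketched like this (my own toy version, returning code lengths only; the conclusion above, that it doesn't make English usefully more "efficient", stands regardless):

```python
import heapq
from itertools import count

def nary_code_lengths(freqs, n=27):
    """Code length per symbol in an n-ary Huffman tree built from
    freqs, a {symbol: weight} dict. Zero-weight dummy leaves are added
    so that every merge can take exactly n nodes."""
    tie = count()  # unique tiebreaker so the heap never compares dicts
    heap = [(w, next(tie), {s: 0}) for s, w in freqs.items()]
    while (len(heap) - 1) % (n - 1) != 0:
        heap.append((0, next(tie), {}))  # dummy leaf
    heapq.heapify(heap)
    while len(heap) > 1:
        total, merged = 0, {}
        for _ in range(n):  # merge the n lightest subtrees
            w, _, depths = heapq.heappop(heap)
            total += w
            for s, d in depths.items():
                merged[s] = d + 1  # everything moves one level deeper
        heapq.heappush(heap, (total, next(tie), merged))
    return heap[0][2]
```

With n=27 and real word frequencies, the most common words come out with one- or two-"letter" codes; the snag is everything else about turning that into a speakable language.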
Bleh. Oh well; seems I can't stay away from Magnitogorsk even when I try.
What more did I do? In a similar vein, I tried making "letters" that would fit the "standardized and optimized communications method" air I was trying to convey. Not written by hand, but by computer. Here I got something to show for my day's work: I just chose ordered triples (with repetition) of {circle, triangle, rectangle}, three shapes to a glyph, for a total of 3^3 = 27 glyphs. (See why I picked 27 above?)
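The glyph inventory itself is just a Cartesian product; a couple of lines make the 27-count explicit (the shape names are placeholders for the drawn forms):

```python
from itertools import product

# One glyph = an ordered triple of base shapes, repeats allowed,
# giving 3 ** 3 = 27 glyphs, one per letter of the 27-symbol alphabet.
SHAPES = ('circle', 'triangle', 'rectangle')
GLYPHS = list(product(SHAPES, repeat=3))
```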
It looks like this. Again, a detail.. though it plays a bigger role in a variation of the scenario, where it's used for name tags of sorts.
And now I'm "drifting" again: that really odd sensation where it's as if I'm in front of my head. So this'll have to do for now :) Comment!
This is amusing
From a web page (don't remember which): "Ultimate computer science is not about what is probable but what is possible. The rest can be left to the engineers." I don't think I can find a better quote to illustrate the "well, the rest is merely engineering" point of view of abstract science :)
Oh, and I wonder if there's a law analogous to Arrow's impossibility theorem for trust metrics. I have a hunch there is, though, because from one perspective, voting is just trust calculation where all voters are trusted equally.
(And I did write something about RfS today. Yay!)
(no subject)
Here's a random thought:
Would it be possible to use set covering to make efficient differential diagnoses?
You wouldn't need set covering if each person could only have one illness at a time, but if they can have several at once, you'd need it.
(Though all of this is hypothetical since, like with Mycin, if the program was wrong, who'd assume responsibility? Nobody would, so the program couldn't be used, even if it was right 999 times out of 1000.)
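A sketch of how that could look, using the standard greedy approximation to set cover (the exact minimum cover is NP-hard; the disease/symptom data here is invented purely for illustration):

```python
def greedy_diagnosis(symptoms, diseases):
    """Greedy set cover: repeatedly pick the disease explaining the most
    still-unexplained symptoms. diseases maps a name to the set of
    symptoms it can cause. Greedy gives a logarithmic approximation
    factor; finding the true minimum cover is NP-hard."""
    uncovered = set(symptoms)
    chosen = []
    while uncovered:
        best = max(diseases, key=lambda d: len(diseases[d] & uncovered))
        if not diseases[best] & uncovered:
            break  # leftover symptoms that no known disease explains
        chosen.append(best)
        uncovered -= diseases[best]
    return chosen
```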