O Archduke (Capeo do Bundo)'s Journal

[10 Sep 2009|12:06am]
*pHEAR*


[01 Jun 2009|11:39pm]
This is pretty damned cool

http://www.wired.com/gadgetlab/2009/05/homebrewed-cpu/

[30 Apr 2009|02:15pm]
http://podcasts.military.com/2009/04/stephen-colbert-headed-for-an-undisclosed-location.html#comments

The USO is bringing Stephen Colbert to Iraq to do some shows. I really love the USO. They are a wonderful organization, and I don't think there is a single service member they haven't touched.

[11 Apr 2009|02:05pm]
I'm now en route to MI one week shy of three years in Upstate NY.

[06 Jan 2009|02:08am]
http://market-ticker.denninger.net/archives/618-Congress-What-Bernanke-and-Hank-Arent-Telling-You.html

and then...

http://market-ticker.denninger.net/archives/703-Uh-Oh.....-Monetary-Flat-Spin.html

eep!

[05 Dec 2008|05:10pm]
This is the visible progress for today: I now have a WordPress plugin that talks to my web service via a PHP proxy and gives bloggers some real writing help!

http://tinyurl.com/6db82f

[19 Nov 2008|11:24pm]
I should be doing homework right now. It's due tomorrow and I haven't read the chapter or started the assignment yet. Care to know why?

Well, I launched my side project today. http://www.feedbackarmy.com/

Feedback Army is a tool for web developers to easily get feedback from normal users. You visit the site, provide your URL, write up any questions you want answered, click submit, provide your credit card number, and voila! Feedback rolls in.

How does it work? I use a web service called Mechanical Turk, run by amazon.com. Through this service my programs can submit jobs to a large pool of workers. The workers complete the tasks and the results are made available to my programs. Instant outsourcing, and the workers are paid for their effort.
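
For the curious, here is a sketch of that submission flow in Python. This is not the Feedback Army code; it uses Amazon's boto3 MTurk client (a much later API than the one I'm describing), and the question XML and dollar figures are made up for illustration.

import boto3

# connect to Mechanical Turk (credentials come from the usual AWS config)
mturk = boto3.client("mturk", region_name="us-east-1")

question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>feedback</QuestionIdentifier>
    <QuestionContent><Text>Visit the site and answer the questions...</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

# post the job: each assignment is one worker's review
hit = mturk.create_hit(
    Title="Review a website and give feedback",
    Description="Visit the site below and answer a few questions about it.",
    Keywords="website, feedback, review",
    Reward="0.30",                       # paid per completed review
    MaxAssignments=10,                   # the 10 reviews mentioned below
    LifetimeInSeconds=36 * 60 * 60,      # the 36 hour expiry mentioned below
    AssignmentDurationInSeconds=30 * 60,
    Question=question_xml,
)

# later: collect whatever the workers submitted
done = mturk.list_assignments_for_hit(HITId=hit["HIT"]["HITId"])
for a in done["Assignments"]:
    print(a["Answer"])  # QuestionFormAnswers XML holding the feedback text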

You can check out the jobs available at http://www.mturk.com/

I started the project last Wednesday after seeing a thread on Hacker News asking whether there is a need for outsourced usability testing. http://news.ycombinator.com/item?id=362459

I started working on it right away (as evidenced by my quiz grade last week) and then I went to California for 5 days. Well, 2 days of travel and 3 days of hanging out. Yesterday and today I busted my butt getting this site together. Today I was finally able to submit a job myself to get feedback on feedbackarmy.com.

I was really impressed with the feedback. The four comments I received were thoughtful and mostly useful. Certainly worth the money paid.

The problem I've encountered is that people didn't bite fast enough. So I've upped the micropayment by $0.05 and removed the United States locale restriction. I'm curious to see what kind of response I get now, especially since it is 11:30pm on a weeknight.

My goal is to consistently provide clients with 10 reviews in 24 hours. The job expires in 36 hours but I'd like to get the reviews sooner rather than later. We'll see how it works out with round 2. I'll know when I wake up tomorrow.

Mechanical Turk is *very* neat. I look forward to playing with it more in the future. This service is fun too. I built the full site, including the Mechanical Turk portion, PayPal integration, design, and miscellaneous functionality, in 3 working days. Not bad.

Product Idea [02 Nov 2008|01:05pm]
Ok, I just want to write this out so I don't forget it. I have a problem in the Sleep community. I'm sure other projects and/or businesses have this problem as well.

The information about what you can do with Sleep exists. The problem is it exists in so many damned places. Places that change and receive new information! There is the website, the documentation, the jroller blog, the mailing list, the articles I write, and even a few outside contributions. All well and good, but how is someone going to find all this crap besides a hope and a prayer on Google?

Now I do maintain links to all this stuff, but I still feel it's a poor way to keep users informed, and I don't have the energy or time to turn every interesting mailing list thread into an article (even though there aren't that many). The JMX thread comes to mind though. I collaborated with a user to develop a JMX (Java Management Extensions) API for Sleep and it rocks. It only exists on the mailing list though.

So I thought to myself "What Sleep needs is a resource center that lets a user search and find all this stuff".

What would such a resource center do?

- It would subscribe to mailing lists and RSS feeds, and allow the submission of third-party links.
- It would use this information to become a repository of all things Sleep.
- This repository will be searchable in 3 ways (covered in detail in a moment; a toy sketch follows this list):
-- keyword search
-- tag search
-- likeness search

- Yes, in the resource center, any resource should have editable tags and/or a summary associated with it.
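
To make those three search modes concrete, here is a toy sketch in Python over an in-memory index. All the names and structures here are hypothetical, and "likeness" is approximated with cosine similarity over word counts.

import math
from collections import Counter, defaultdict

resources = {}                  # url -> {"title": ..., "tags": set, "text": ...}
tag_index = defaultdict(set)    # tag -> set of urls

def add_resource(url, title, tags, text):
    resources[url] = {"title": title, "tags": set(tags), "text": text}
    for t in tags:
        tag_index[t].add(url)

def keyword_search(phrase):
    # naive keyword search: substring match against the stored text
    return [u for u, r in resources.items() if phrase.lower() in r["text"].lower()]

def tag_search(tag):
    return sorted(tag_index[tag])

def likeness_search(url, n=5):
    # rank other resources by cosine similarity to this one's word counts
    def vec(u):
        return Counter(resources[u]["text"].lower().split())
    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    target = vec(url)
    scored = sorted(((cosine(target, vec(u)), u) for u in resources if u != url), reverse=True)
    return [u for score, u in scored[:n]]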

Now here is how I picture the flow of pages:

The initial page in the resource center is the search page. This page has a blank search box. There will also be a tag cloud displayed.

Naturally the search page (or a tag link) directs you to the results page. I'd like to show each result along with its title and associated tags. If there is room I'd also like to display why each result is there (tag match: blah; phrase match: "this is the phrase I matched"). The beautiful thing is I have the natural language processing tools to do this.

A click on a result will move you to a resource information page. This page will contain:
- a screenshot of the resource (if a webpage). A liquid paper view of the resource (if a PDF or something else)
- The tags associated with it
- The summary associated with it
- The title taken from the resource
- Options to edit summary and tags in place
- Links to other documents like this resource (the likeness search)

I think it's important the resource center becomes a place to find content, not to generate it. The resources themselves are available for that.

Once I solve this problem for myself, maybe I can solve it for others. I propose doing this by creating a service folks can sign up for. They can customize their resource center and run it as an embeddable JavaScript in their site. Simply include the URL and go. I'd have to deal with cross-domain AJAX, but it'd be a cool feature to make available.

[01 Nov 2008|03:45pm]
Small benefit of living downtown: next door there is a popular restaurant/bar called Dinosaur BBQ. Today (and recently on weekends) they have a blues musician playing in their outside area. I have my windows open and I'm able to enjoy the live music as if I were sitting down there. Pretty cool.

[14 Oct 2008|11:28pm]
Studying for an exam and stumbled on this definition in The New Hacker's Dictionary. It made me laugh:

recursion /n./

See recursion. See also tail recursion.

[09 Oct 2008|01:42am]
Ok, I'm a fan now. This guy is every Computer Scientist's dream. Too bad he doesn't have a video for his song about the singularity.

A song about Fibonacci numbers?

[embedded video]

One about building killer robots:

[embedded video]

My personal favorite so far:

[embedded video]

[16 Sep 2008|11:56am]
I'm thinking of starting another open source project. All the other things I'm supposed to do be damned. Here is the deal. I want to apply my neural networks to filtering different sources of data. To make a neural network really sing for you, it helps to divide up the data into "features" with values from 0.0 to 1.0. These features represent parts of the data (all of them together with their values are a fingerprint of the data), and the higher the number, the stronger that feature's presence is compared to the others.

At first I used word tokens as features with a weight of 1.0 for each. As I wrote before, my results were muddled. So the next thing I want to try is using semantic signatures as features. A semantic signature is a pre-determined concept paired with a weight. A document may have multiple such signatures with different weights. Sound like something from the last paragraph?
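
To make that concrete, here is a small Python sketch of turning one document's semantic signature (concept => weight) into the kind of 0.0 to 1.0 feature vector the network consumes. The concept names and weights are invented.

# hypothetical signature output for one document: concept -> raw weight
signatures = {
    "Computers/Programming/Languages/Java": 0.61,
    "Computers/Internet/Chat": 0.22,
    "Business/Employment": 0.09,
}

def to_features(signatures, concepts):
    # project onto a fixed concept ordering, scaling the strongest concept to 1.0
    top = max(signatures.values(), default=0.0)
    return [signatures.get(c, 0.0) / top if top else 0.0 for c in concepts]

concepts = sorted(signatures)   # in practice: the full, fixed list of concepts
print(to_features(signatures, concepts))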

A while back I looked at some semantic technologies like OpenCalais and Semantic Hacker (<-- from Rochester, NY!!!). OpenCalais is made by Reuters and doesn't seem to work well for the non-news things I tried it on. Semantic Hacker looks kind of cool, but I'm finding the web service will filter out faint values (even though those faint values might really matter in the document).

Here is what Semantic Hacker gives me for a craigslist job ad; again, I had to strip the extraneous stuff to get this far. The results were way off before:

Business/Financial_Services/Venture_Capital                                                0.33455554
Business/Financial_Services/Venture_Capital/Development_Stage                              0.3252743
Computers/Programming/Languages/PHP/Tutorials/Collections                                  0.27718374
Business/Financial_Services/Venture_Capital/Regional/North_America                         0.25884017
Business/Financial_Services/Venture_Capital/Capital_Access_-_Financing                     0.22205356
Business/Financial_Services/Venture_Capital/Industry_Focus                                 0.21784493
Computers/Software/Shareware/Windows/Security/Parental_Control                             0.21767771
Computers/Programming/Resources/Directories                                                0.1886673
Business/Education_and_Training/Team_Building                                              0.18185069
Business/Opportunities/Valuation_Services                                                  0.1785658
Computers/Data_Formats/Markup_Languages/HTML/Tutorials                                     0.17329806
Computers/Computer_Science/Academic_Departments/North_America                              0.17213543
Computers/Education/Internet/Courses                                                       0.16943185
Computers/Data_Formats/Style_Sheets/CSS/FAQs,_Help,_and_Tutorials/Tutorials                0.1608434
Computers/Programming/Languages/JavaScript/Scripts/Collections                             0.15986392
Computers/Programming/Languages/JavaScript                                                 0.14919305
Computers/Data_Formats/Style_Sheets/CSS/FAQs,_Help,_and_Tutorials                          0.1461431
Games/Video_Games/Driving_and_Racing/Simulations/NASCAR_Games/NASCAR_Series                0.14173141
Computers/Open_Source/Employment                                                           0.13849033
Business/Financial_Services/Banking_Services/Banks_and_Institutions/Oceania/Australia      0.13599382
Computers/Programming/Internet/Directories                                                 0.13462016
Computers/Multimedia/Music_and_Audio/Audio_Formats/MP3/News_and_Media/MP3.com              0.13434145
Sports/Soccer/CONCACAF/Youth/Clubs_and_Teams/California                                    0.12773187
Computers/Data_Formats/Markup_Languages/HTML/Tutorials/Beginners                           0.12558177
Computers/Programming/Languages/PHP/Tools                                                  0.12442707
Computers/Programming/Resources                                                            0.12143286
Computers/Programming/Languages/Java/News_and_Media/Books                                  0.11869348
Business/Business_Services/Consulting/Marketplaces/Talent_Auctions                         0.11758658
Computers/Programming/Languages/Java/Resources                                             0.114691906
Computers/Education/Programming/FAQs,_Help,_and_Tutorials                                  0.11407475


Here is what Semantic Hacker gives me for a craigslist job ad looking for an iPhone developer. Note that I did strip the extraneous stuff.

Computers/Systems/Apple/Macintosh/Rumors                                                   0.7633384
Home/Consumer_Information/Electronics/Communications/Mobile                                0.24909219
Computers/Mobile_Computing/Wireless_Data                                                   0.23962286
Computers/Software/Operating_Systems/Mac_OS/Security                                       0.20100276
Computers/Mobile_Computing/Wireless_Data/Platforms                                         0.17536052
Business/Telecommunications/News_and_Media                                                 0.13682538
Business/Telecommunications/Services/Wireless/Consulting                                   0.13416077
Shopping/Consumer_Electronics/Communications/Wireless/Cellular_Phones                      0.12296195
Business/Energy_and_Environment/Utilities/Water/Equipment                                  0.11766544
Arts/Online_Writing/Fiction                                                                0.1112681
Shopping/Consumer_Electronics/Communications/Wireless                                      0.11012225
Business/Arts_and_Entertainment/Photography/Photographers/Commercial-Advertising/Europe    0.10860253
Computers/Mobile_Computing/News_and_Media                                                  0.1085717
Shopping/Consumer_Electronics/Communications/Wireless/Cellular_Phones/Accessories          0.108239375
Computers/Systems/Apple/Macintosh                                                          0.10344797
Computers/Mobile_Computing/Wireless_Data/WAP                                               0.095365375
Business/Telecommunications/Equipment/Telephones/Wireless_Phones/Accessories               0.08911421
Computers/Mobile_Computing/Wireless_Data/Resources                                         0.08825931
Computers/Mobile_Computing/Wireless_Data/3G                                                0.08814554
Computers/Mobile_Computing/Wireless_Data/Short_Messaging_Service                           0.087946974
Computers/Internet/Chat/Instant_Messaging                                                  0.08785649
Computers/Systems/Apple/Macintosh/Development                                              0.07982738
Computers/Data_Communications/Wireless                                                     0.07615996
Computers/Systems/Apple/Macintosh/Development/Mac_OS_X                                     0.0747718
Business/Telecommunications/Services/Wireless/News_and_Media                               0.073287174
Business/Telecommunications/Two-Way_Radio/Equipment/Dealers/North_America                  0.0679186
Computers/Programming/Languages/Java/Resources/Micro                                       0.061543915
Business/Telecommunications/Two-Way_Radio                                                  0.06035967
Business/Employment/Job_Search/Resume_Advice                                               0.0603559
Computers/Software/Operating_Systems/Graphic_Subsystems/Toolkits/Qt                        0.06027392


Pretty useful when the ad has more detail. But this got me thinking. I like what they are doing, but I'd like to tweak it and make it more up to date. For example, their categories don't seem to mention the iPhone. How could I come up with a list of categories like theirs, match free text to them, and generate the weights?

Then I had the thought of starting an open-source semantic signature extraction project. I haven't worked out all the details yet. But I do think the pieces are out there.

First off, I'd use a database like WordNet to label and filter out noisy words (the, and, or, etc.). This will save some space in the dataset and save on processing time.
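
Something like this, say; the hardcoded stopword list is a stand-in for whatever a WordNet-backed version would supply:

STOPWORDS = {"the", "and", "or", "a", "an", "of", "to", "in", "is", "it"}

def tokenize(text):
    # lowercase, split on whitespace, keep only alphabetic non-noise words
    return [w for w in text.lower().split() if w.isalpha() and w not in STOPWORDS]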

I'd then like to use the directories from dmoz.org (the Open Directory Project) as the semantic signatures. The contents of each category will be the corpus of what fits into that signature. Crawling pages for a better corpus is an option too, but I would try without that first.

Now comes the tough part: how to do the matching. I think building up a stored vector of keywords (after extraneous phrase removal) will work. When I wrote kindling I used Pearson's correlation coefficient to compare two vectors. I was able to cache a lot of the operations on the vectors (e.g., their sums) to make the actual comparison fast.
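
Roughly what I mean by caching, sketched in Python. The point is that most of the terms in Pearson's r depend on only one vector, so they can be computed once and stored alongside each signature.

import math

def cache_sums(v):
    # precompute the single-vector terms of Pearson's r
    n = len(v)
    s = sum(v)
    sq = sum(x * x for x in v)
    return {"n": n, "sum": s, "den": n * sq - s * s}

def pearson(x, cx, y, cy):
    # both vectors must use the same keyword ordering (and length)
    num = cx["n"] * sum(a * b for a, b in zip(x, y)) - cx["sum"] * cy["sum"]
    den = math.sqrt(cx["den"] * cy["den"])
    return num / den if den else 0.0

a = [1.0, 0.0, 2.0]
b = [0.5, 0.1, 1.9]
print(pearson(a, cache_sums(a), b, cache_sums(b)))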

I think some clever hackery could be done to make sure only a subset of all the semantic concepts needs checking when processing a new document. I'm thinking maybe using k-means to process the database into clusters and comparing a new document against the closest 2-3 clusters only.

Another option is to just ignore any signatures that don't have any words in common with the document to process.
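
That second option is cheap to implement with an inverted index; a sketch:

from collections import defaultdict

word_to_signatures = defaultdict(set)   # built once from the dmoz-derived corpora

def index_signature(name, words):
    for w in words:
        word_to_signatures[w].add(name)

def candidate_signatures(doc_words):
    # only signatures sharing at least one word with the document get scored
    candidates = set()
    for w in doc_words:
        candidates |= word_to_signatures[w]
    return candidates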

These are some thoughts. I think it'd be a fun project. Certainly useful for my nefarious purposes.

[10 Sep 2008|11:02am]
Friends are awesome. Theladders.com mentioned polishmywriting.com in their newsletter. Here is a good chance to see how well my server holds up. Not sure when the load will spike (or if it already did) but so far the server is holding up well. Quite happy about that. :) Very cool.

[08 Sep 2008|02:42pm]
So I'm playing around with an old project of mine. A few years ago I started to work on an SDL version of Walter Bright's (of D fame) Empire: Wargame of the Century. Supposedly it was the first computer strategy game, created in the 70s.

I wrote up all the game logic, wrote a GUI toolkit on top of SDL, and did a really basic GUI for the game. I even got so far as a server and allowing multiple clients. However I still have to make the server transfer state, add a few commands, and support a few UI modes. Making the game map smarter about dirty regions would help too (the toolkit itself is smart about dirty regions but the map renderer is its own component and isn't very smart).

All said, it's not actually that bad as C code goes. In some places I could pay more attention to my string bounds. Other than that it's well modularized, and the Empire Toolkit allowed me to create a really live-feeling UI. I used function pointers like crazy, but that helped organize it.

I'm tempted to just put what I have up on my homepage, but I wouldn't expect anyone to pick it up until it is playable in some fashion.

Neat project though.

THE EXPRESS - Trailer [05 Sep 2008|03:35pm]
[embedded video]

[02 Sep 2008|01:24am]
Made a lot of progress on my neural network code today. I should be doing homework but screw homework. I managed to teach my network XOR. XOR seems to be the hello world of the neural network world. The reason is that a network with two inputs feeding a single perceptron and an output can't learn XOR. A perceptron computes a weighted sum of its inputs and fires based on the result; since XOR isn't linearly separable, no single weighted sum can represent it, and you need a hidden layer.

$ java -jar /Users/raffi/sleepdev/sleep/sleep.jar xornn.sl           
0 xor 0: %(result => 1.8428326860431534E-4)
0 xor 1: %(result => 0.9908915659480854)
1 xor 0: %(result => 0.9908915659480854)
1 xor 1: %(result => -1.0244302220076359E-4)


If you read that output closely, you'll notice it's correct: the values round to 0, 1, 1, and 0.
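
For anyone who wants to poke at this without Sleep, here is the same experiment as a from-scratch sketch in Python: a 2-2-1 sigmoid network trained with plain backpropagation. With only two hidden units it can occasionally settle into a local minimum, so rerun with a different seed if the outputs don't separate.

import math, random

random.seed(42)

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# weights for 2 hidden units and 1 output unit; index 2 is the bias
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(a, b):
    h = [sig(w[0] * a + w[1] * b + w[2]) for w in w_h]
    o = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

cases = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
for _ in range(20000):
    for (a, b), t in cases:
        h, o = forward(a, b)
        # output delta, then hidden deltas (computed before updating w_o)
        d_o = (t - o) * o * (1 - o)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_o[i] += 0.5 * d_o * h[i]
        w_o[2] += 0.5 * d_o
        for i in range(2):
            w_h[i][0] += 0.5 * d_h[i] * a
            w_h[i][1] += 0.5 * d_h[i] * b
            w_h[i][2] += 0.5 * d_h[i]

for (a, b), _ in cases:
    print(a, "xor", b, "->", round(forward(a, b)[1], 4))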

My real goal is to classify entries in RSS feeds. I'm having some trouble getting that working. Here is what I have so far. I'm finding that my classifier thinks everything sucks, and also that things that rule, rule, and things that suck still rule, but they don't rule as much as things that rule. I'm thinking of just using the lowest "rules" value among the files that rule as a cutoff and ignoring everything else. It won't filter everything that sucks, but at the same time it makes a reasonable division point.

$ java -jar sleep.jar test_classifier.sl 
/Users/raffi/neural/train/10_sucks.txt : %(rules => -0.5669416408288851, sucks => 0.9758401349480642)
/Users/raffi/neural/train/11_rules.txt : %(rules => 0.7793401962455176, sucks => 0.9594194603551056)
/Users/raffi/neural/train/12_sucks.txt : %(rules => 0.2698011308417212, sucks => 0.9989213367887784)
/Users/raffi/neural/train/13_rules.txt : %(rules => 0.9714276174845673, sucks => 0.9890561250009321)
/Users/raffi/neural/train/14_sucks.txt : %(rules => 0.6626070968061633, sucks => 0.999797584489247)
/Users/raffi/neural/train/15_sucks.txt : %(rules => 0.607661582382322, sucks => 0.999173845673098)
/Users/raffi/neural/train/16_sucks.txt : %(rules => 0.8495322637891594, sucks => 0.9990712740153843)
/Users/raffi/neural/train/17_rules.txt : %(rules => 0.9657097842373441, sucks => 0.9982580129505595)
/Users/raffi/neural/train/18_rules.txt : %(rules => 0.8818367883847134, sucks => 0.9991242685303593)
/Users/raffi/neural/train/19_rules.txt : %(rules => 0.9199167479450976, sucks => 0.9992598305024123)
/Users/raffi/neural/train/1_sucks.txt : %(rules => 0.8953799772431881, sucks => 0.9993113773863517)
/Users/raffi/neural/train/20_rules.txt : %(rules => 0.9097810200161218, sucks => 0.9986571196661748)
/Users/raffi/neural/train/2_rules.txt : %(rules => 0.9099343630927776, sucks => 0.9969689145981467)
/Users/raffi/neural/train/3_sucks.txt : %(rules => 0.8527037368026661, sucks => 0.9991352009823448)
/Users/raffi/neural/train/4_sucks.txt : %(rules => 0.7349048978615093, sucks => 0.999859096813427)
/Users/raffi/neural/train/5_rules.txt : %(rules => 0.9259989835819108, sucks => 0.9996166656772045)
/Users/raffi/neural/train/6_sucks.txt : %(rules => 0.7551007965636816, sucks => 0.9995741989151167)
/Users/raffi/neural/train/7_rules.txt : %(rules => 0.8382024372352468, sucks => 0.9996144469194209)
/Users/raffi/neural/train/8_sucks.txt : %(rules => 0.8654390548332048, sucks => 0.999615041254279)
/Users/raffi/neural/train/9_sucks.txt : %(rules => 0.9375595025386569, sucks => 0.999619102291758)


Of course, I trained on everything in this dataset. Later I will have to get some data the network hasn't seen, decide whether I think it sucks or rules, and then see what my algorithm tells me. If the lowest rules score works out for the most part then I will go that route.
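
The division-point idea itself is tiny once the scores are in hand. A sketch, assuming a scores map shaped like the output above (filename -> scores, with my label embedded in the filename):

def division_point(scores):
    # lowest 'rules' score among the files labeled as ruling
    return min(s["rules"] for name, s in scores.items() if "_rules" in name)

def looks_good(score, threshold):
    return score["rules"] >= threshold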

The thing I am most pleased with is that the API for using the neural network is pretty simple. As it should be. Here is the code for training XOR:

# create a neural network object with a single output named "result"
$network = newObject("nn", @("result"));

# present the four XOR training cases 1000 times each
for ($x = 0; $x < 1000; $x++)
{
   [$network trainquery: @(0, 0), %(result => 0.0)];
   [$network trainquery: @(0, 1), %(result => 1.0)];
   [$network trainquery: @(1, 0), %(result => 1.0)];
   [$network trainquery: @(1, 1), %(result => 0.0)];
}

println("0 xor 0: " . [$network getresult: @(0, 0)]);
println("0 xor 1: " . [$network getresult: @(0, 1)]);
println("1 xor 0: " . [$network getresult: @(1, 0)]);
println("1 xor 1: " . [$network getresult: @(1, 1)]);


In theory I should be able to approximate any mathematical function given enough training data. The only problem is I don't know how to normalize the input and output for said function. It probably has something to do with mapping my input into the range of a sigmoid function like tanh and reconstituting it on the way out. Not something I know how to do off the top of my head.
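
My guess at the normalization, sketched in Python: plain min-max scaling into the network's output range, inverted on the way back out. This assumes the function's domain and range are known up front, which is the part I'm unsure about.

def make_scaler(lo, hi):
    # map [lo, hi] onto [0, 1] and back
    def scale(x):
        return (x - lo) / (hi - lo)
    def unscale(y):
        return y * (hi - lo) + lo
    return scale, unscale

scale, unscale = make_scaler(0.0, 100.0)
# train on (scale(x) -> scale(f(x))); at query time, unscale the network's output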

Oh well, tomorrow and onwards I'll focus on my school homework. This has been a fun exercise.

[29 Aug 2008|12:19pm]
For fun I did a backtrack implementation in Sleep yesterday. So now I can write Prolog-type stuff.

Here is the code for backtrack. This function accepts an array of arrays as its argument. Each position of the top-level array represents a free variable that I want to bind something to, and the array held at each position represents the domain of all possible values I can bind to that free variable. Using coroutines (to promote lazy generation of bound values), backtrack assigns all possible combinations of the values to the free positions and yields each potential result.

sub backtrack
{
   local('$var $next $slist');

   if (size($1) > 1)
   {
      # pair each value from the first domain with every combination
      # the recursive coroutine generates for the remaining domains
      $slist = sublist($1, 1);
      foreach $var ($1[0])
      {
         while $next ([$this: $slist])
         {
            yield concat($var, $next);
         }
      }
   }
   else
   {
      # base case: one domain left, yield each of its values in turn
      foreach $var ($1[0])
      {
         yield $var;
      }
   }
}
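
For comparison, the same idea reads naturally as a recursive generator in Python. It is essentially a lazy cartesian product, which is exactly what the coroutine version computes.

def backtrack(domains):
    # lazily yield every assignment of one value per position
    if not domains:
        yield []
        return
    for value in domains[0]:
        for tail in backtrack(domains[1:]):
            yield [value] + tail

for assignment in backtrack([[0, 1], [0, 1]]):
    print(assignment)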


Here is the SEND + MORE = MONEY puzzle written in Sleep using this.



# note: 1 is left out of the shared domain because M is pinned to 1
# below and the letters must all map to distinct digits anyway
@digits = @(0, 2, 3, 4, 5, 6, 7, 8, 9);

sub isAnswer
{
   local('$S $E $N $D $M $O $R $Y');
   ($S, $E, $N, $D, $M, $O, $R, $Y) = $1;

   if ( (int("$S$E$N$D") + int("$M$O$R$E")) == int("$M$O$N$E$Y"))
   {
      # make sure there are no duplicates
      if (size(putAll(%(), $1, { return 1; })) == 8)
      {
         return 1;
      }
   }
}
                           #  S        E        N        D        M     O        R        Y
while $potential (backtrack(@(@digits, @digits, @digits, @digits, @(1), @digits, @digits, @digits)))
{
   if (isAnswer($potential))
   {
      local('$S $E $N $D $M $O $R $Y');
      ($S, $E, $N, $D, $M, $O, $R, $Y) = $potential;

      println("   $S$E$N$D");
      println(" + $M$O$R$E");
      println("-----------");
      println(" $M$O$N$E$Y");
      return;
   }
}


Not the fastest thing in the world. It takes about 455 seconds for Sleep to solve this puzzle. With 9 possible digits for each of the 7 free letters, that is 9^7, or nearly 4.8 million, candidate assignments to analyze. Oh well, that is what I get with an interpreted language sitting on top of a virtual machine.



Here is another problem... finding integer solutions for a circle of radius 5:



global('@digits $solution $X $Y');

@digits = @(-5, -4, -3, -2, -1, 0, 1,  2, 3, 4, 5);

while $solution (backtrack(@(@digits, @digits)))
{
   ($X, $Y)  = $solution;

   if ((($X * $X) + ($Y * $Y)) == 25)
   {
      println("$X $+ , $Y");
   }
}


I became inspired to play with all this when I saw some examples of Erlang's list comprehensions in action. It's pretty neat how they enable a mechanism for logic programming. I've thought about adding something like them to Sleep. Fortunately Sleep does have yield, and that is enough to enable the backtracking. So maybe not. :)
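
As a taste of that style, the circle problem above collapses to a single comprehension; in Python, say:

r = range(-5, 6)
print([(x, y) for x in r for y in r if x * x + y * y == 25])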



http://en.wikibooks.org/wiki/Erlang_Programming/List_Comprehensions

[27 Aug 2008|02:40pm]
I can so relate to this:

"Over the years I’ve met a lot of smart people and I’ve invited them to tell me what they think. For years people did not “see it” and that exacted a toll on my confidence. Doubt is born out of fatigue and loneliness, and there is a lot of both when you are running a start up. Hang in there and keep your feet moving - there’s still a lot of time for you to change the world."

http://www.freshbooks.com/blog/2008/08/27/7-ways-ive-almost-killed-freshbooks/

[26 Aug 2008|10:16pm]
They love it, they really love it.

"This is a great resource!

I've run a couple of draft essays through it, and it has helped speed up my revision quite a bit.

Just FYI, about 50% of the suggestions are useful. It's particularly good at picking up the passive voice. I find the word suggestions less useful. They work okay on relatively unpolished bits of my writing, but are mostly superfluous on the more polished bits. The reason, I guess, is that in the more polished bits the word choice is already quite careful.

I'm spreading word around on my blog, and on FriendFeed."

AND...

"Checks your writing against more than 7000 rules of plain language. I put a couple of draft essays through it, and found about half the suggestions helpful, which is a pretty good batting average."

AND...

"I really like it. An "Ignore rule" button would be nice. If you're trying to demonstrate a strong vocabulary in your paper, you might not ever want simpler words. (I chose that rule in particular because you seem to have made that rule too aggressive. A passage of solely simple words, while possibly more clear, is not as pleasant to read.)

Your suggestions don't always actually fit, but that's true of certain large, expensive pieces of software (Microsoft Word) too.

Also, allowing people to upload files (at least text files, maybe rtf and doc if you can parse them into text) might be nice so they don't have to select and paste it all.

Overall it looks like a very useful site though."

AND...

"Without spending the time to test out the algorithm, I can tell you this is sorely needed in the world of blogging. I'm appalled by the atrocious grammar often found at places like TechCrunch. I can't imagine how the big blogs don't all outsource copy-editing.

Perhaps an awesome feature would be the ability for them to use your site and then have the copy-edited stuff published directly to the blog, maybe via xml-rpc or something."

Now that I've spread the word beyond my anti-writing friends, it turns out people like polishmywriting.com. I'm glad to hear this. I put a lot of work into it in June and gave up when I received zero feedback.

I mean, I know I can't just put something out there and expect the world to beat a path to my door. At the same time I expected someone, anyone, in the circle of folks I know to say "this is useful to me", but that never came. And so I gave up on it.

I'm going to work on polishmywriting again. Here are some of the things I want to do (sooner rather than later):

- convert the rule engine into a web service... this will allow moconti to scale to far more users. Right now the rule engine is not being garbage collected; I don't yet know why, but I will figure it out. This causes my server to die relatively quickly. I don't have this problem with the other sites hosted by my app engine. For example, when jIRCii was released there were lots and lots of hits... people love jIRCii... but no memory leak, so the server held up fine.

- make a Facebook app... should be simple, I have all the pieces in place... I was just waiting on the Facebook style sheet (which I did find eventually).

Just in time for the back-to-school rush. And now that I'll be working again ($$$), I may even invest the $600 in a press release service to get the word out to college newspapers and the national media.

I put a lot of effort into polishmywriting and became very discouraged by the lack of feedback. It was part of the downward spiral that led me to seek a job again. We'll see what happens. Maybe it is a winning idea that suffered from my not marketing it.

[25 Aug 2008|09:04pm]
I received 4 script submissions for jIRCii this month with more in development. Looks like the jIRCii community is kicking some ass.

This is how I judge a successful open source project.

For a visual tour of what I'm talking about, scroll to the bottom of the screenshots page. This is hacker art at its best:

http://jircii.dashnine.org/screenshots.html
