
A new sort of development environment

Started by October 04, 2010 08:44 AM
43 comments, last by MoundS 14 years, 4 months ago
Quote:
Original post by Antheus
If this were a hosted service, consider the following:
#include </dev/random>


The file system doesn't exist, so /dev/random doesn't exist. This is another reason why C++ just plain won't work: the language is designed around files. No files = no includes. Even if an analogue to /dev/random were created, it's not unreasonable to expect that the compiler could be configured to avoid such malarkey.
Quote:

At which point the question becomes, how much would it cost per cycle/hour/something.
Or how much it would require to host locally.

I'm not thinking of this as a monetized system, but I don't see any reason why it shouldn't cost the same as hosting any other application of similar size and processing complexity. Once developed, it's going to have a certain code base size, a certain DB size that will grow over time, and bandwidth and processing time will be measurable.

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

Quote:
Original post by capn_midnight
The file system doesn't exist, so /dev/random doesn't exist.
Which means it's impossible to develop anything. You can't even start an application, since main() arguments don't exist.

Quote:
This is another reason why C++ just plain won't work, the language is designed around files. No files = no includes.
Unless it works with existing mainstream languages, it cannot work. There is no way to call existing platform APIs. You can't even implement malloc, since it's defined in the platform API.

Another thing - even in managed languages, C must be supported at minimum. Otherwise one cannot include even the GL bindings or the MySQL API. And that makes it a complete showstopper.

As for unit testing - there need to be files; the distributed application will need to read them, from user documents to the registry to config files.

Quote:
Even if an analogue to /dev/random were created, it's not unreasonable to expect that the compiler could be configured to avoid such malarky.
Many graph algorithms have worse than n^2 complexity, and many cannot be distributed effectively. These types of corner cases are very common. One notorious example is the complexity of recursive make.

All existing problems remain - it comes down to how this alternative will solve them better.

Quote:
I'm not thinking of this as a monetized system, but I don't see any reason why it shouldn't cost the same as hosting any other application of similar size and processing complexity.
The complexity of PHP and similar hosting platforms is O(1). And if some quota is exceeded, the service is shut down. But imagine changing a core dependency, which may require updating gigabytes of data (raw videos, sound, 4096x4096 images).

Quote:
Once developed, it's going to have a certain code base size, a certain DB size that will grow over time, and bandwidth and processing time will be measurable.

The DB is still stored on disk, so all the bottlenecks of existing file systems remain. Existing SQL-centric databases don't cater too well to such usage models. Graph databases exist, but are not necessarily mature enough.

I'm not saying there is anything at all wrong with the idea, but those are some very real issues one must deal with (as to why it wasn't done before).

But one thing is absolutely certain - unless it works out of the box with existing projects using existing languages (such as checking something out of SF or github), then it simply isn't viable.
Quote:
Original post by Antheus
Another thing - even in managed languages, C must be supported at minimum. Otherwise one cannot include even the GL bindings or the MySQL API. [...]
The DB is still stored on disk, so all the bottlenecks of existing file systems remain. [...]
But one thing is absolutely certain - unless it works out of the box with existing projects using existing languages (such as checking something out of SF or github), then it simply isn't viable.

First, assembly dependencies could be stored in the database and referenced therein. There is nothing special about file systems that says user-land application programmers need to know anything about them. Yes, the DB gets stored on disk; that is the DBMS's and the OS's responsibility. You forget that a filesystem is just an abstraction of a disk anyway. And you're thinking too C-centrically; exceedingly few people use C or C++ for a legitimate reason.
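As a rough sketch of what "dependencies stored in the database and referenced therein" could look like (the SQLite schema, table names, and module names here are entirely made up for illustration):

import sqlite3

# Hypothetical schema: module sources live as blobs, dependencies as rows.
db = sqlite3.connect("project.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS modules (name TEXT PRIMARY KEY, source BLOB);
CREATE TABLE IF NOT EXISTS deps    (module TEXT, depends_on TEXT);
""")

def dependencies(name, seen=None):
    # Walk the deps table transitively; no file system paths involved.
    seen = set() if seen is None else seen
    for (dep,) in db.execute("SELECT depends_on FROM deps WHERE module = ?", (name,)):
        if dep not in seen:
            seen.add(dep)
            dependencies(dep, seen)
    return seen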

Second, these are all largely Bike Shed Problems. This database doesn't even exist yet and you're worried about how much CPU time it's going to take?

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

I don't get it.

I mean, I understand the parse tree storage (is that different from Intellisense/all other similar functions?), but that doesn't seem new. You could already keep your asset/source/update/etc. information in a database -- using many existing version control systems.

How are you ditching the "file system?" Those "files" still exist SOMEWHERE. You still have to reference them somehow. Say someone writes a class that will load, parse, and play music files: How do I use that in my code? #include "jbob/music" ? use "music by jbob" ? I suppose you could automate grabbing the necessary sources from your parse tree DB, but is that really much of an improvement? How would you handle conflicting/redundant naming? I can't be the only person who's had to use several different IManagers in a single project. Wouldn't you end up doing things like jbobs::SoundManager vs local::SoundManager?

AFAIK real-time builds are limited by server availability and dependency/asset matching, and well-structured projects already automate their tests.

Basically, I don't get it. What does this environment help me do better?
Quote:
Original post by jolid
How are you ditching the "file system?" Those "files" still exist SOMEWHERE. You still have to reference them somehow. [...]
Basically, I don't get it. What does this environment help me do better?


They exist as blobs in the DB. So the C preprocessor #include as it is currently designed doesn't work in a file-less system. #include as it is currently designed is broken anyway. C# and Python and Scheme get along fine with module import systems that don't allow for recursive references. That's how you solve the problem: you just don't let it *be* a problem. And yet they can all still interoperate with C libraries in some form.
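To sketch the "module imports without files" point, here is roughly how a Python import hook could answer imports out of a database instead of the file system. The modules table and project.db are made-up names, and a real system would presumably store a parse tree rather than raw source:

import importlib.abc, importlib.util, sqlite3, sys

db = sqlite3.connect("project.db")

class DbFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path=None, target=None):
        # Resolve the module name against the database, not a directory.
        row = db.execute("SELECT 1 FROM modules WHERE name = ?", (name,)).fetchone()
        return importlib.util.spec_from_loader(name, self) if row else None

    def create_module(self, spec):
        return None  # use the default module object

    def exec_module(self, module):
        (source,) = db.execute("SELECT source FROM modules WHERE name = ?",
                               (module.__name__,)).fetchone()
        exec(source, module.__dict__)

sys.meta_path.insert(0, DbFinder())
# After this, 'import jbob_music' is answered from the DB rather than from disk.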

But as I said, the whys and wherefores of how things are stored in the database are less important than the access-anywhere aspect of the tool chain, the implicit revision control *on everything*, and the referencing of various assets to one another. It's a system meant for building a project management methodology; it's not about performance.

Maybe I need to make that clear: I don't really care about performance. I've seen people criticize code that was heavy in reflection because of the extra 15 milliseconds it added... to a 5-second-long database query. I care about planning and executing projects. My requirements are tied up in closed, binary document formats and my unit tests are written in code. They are completely, 100% dependent on each other, and yet there is no hard link between them. I basically have to repeat myself every time I want a test to harken back to a requirement or a requirement to mention a test. If one changes, there is no way to tell from the other. This is why people don't write proper documentation: it takes too much time to constantly repeat yourself everywhere.

So the idea is to make a system that puts everything (user accounts for programmers, documentation, timelines, estimates, code sets) in one place. The database is a natural place for this, because it's the relations I care about. Along the way, why don't we see what we can do about getting code out of "files" and into a format that is more readily amenable to the database? That's the only point of storing the parse tree in the DB: we have the chance to do it, and it could possibly make our cross-referencing more expressive.
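To make the "hard link between requirements and tests" idea concrete, a minimal sketch might look like the following; the schema is entirely made up, and the point is only that the relation becomes a first-class row the tooling can query:

import sqlite3

db = sqlite3.connect("project.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS requirements (id INTEGER PRIMARY KEY, text TEXT, revision INTEGER);
CREATE TABLE IF NOT EXISTS tests        (id INTEGER PRIMARY KEY, name TEXT, revision INTEGER);
CREATE TABLE IF NOT EXISTS covers       (test_id INTEGER, requirement_id INTEGER,
                                         verified_at_revision INTEGER);
""")

def stale_links(db):
    # Requirements that changed after the covering test was last verified:
    # exactly the "if one changes, there is no way to tell from the other" problem.
    return db.execute("""
        SELECT t.name, r.id
        FROM covers c
        JOIN tests t ON t.id = c.test_id
        JOIN requirements r ON r.id = c.requirement_id
        WHERE r.revision > c.verified_at_revision
    """).fetchall()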

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

Quote:
Original post by Antheus
I'm not saying there is anything at all wrong with the idea, but those are some very real issues one must deal with (as to why it wasn't done before).

But one thing is absolutely certain - unless it works out of the box with existing projects using existing languages (such as checking something out of SF or github), then it simply isn't viable.


I don't think such a system is going to be mature enough anytime soon to be used in any 'real world' setting where money is at stake anyway.

Developing the concept around one existing language well suited to the idea (Python?) and working out the kinks by means of hobby projects seems like a necessary first step; unless you can convince Microsoft to push it in their next-generation .NET or something.
Another thing that just occurred to me: if you want your storage to be in the form of a tree rather than plain text, that essentially forces you to enforce these linguistic rules in the editor as well; what if you need to leave suddenly, but you are missing a closing parenthesis somewhere? If it doesn't parse, it doesn't save. If all scopes are objects in a tree on your screen anyway, there is no such thing as an unclosed scope. Which I won't miss for a second, by the way.
Quote:
Original post by Eelco
If it doesn't parse, it doesn't save. If all scopes are objects in a tree on your screen anyway, there is no such thing as an unclosed scope. Which I won't miss for a second, by the way.


Yeah, when you type curly braces, you so rarely care about the actual characters themselves; you care about the change of scope that they imply. This is certainly an editor issue.
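A rough sketch of the "if it doesn't parse, it doesn't save" gate, using Python's ast module purely as a stand-in for whatever parser such a system would actually use (the trees table is made up):

import ast, sqlite3

db = sqlite3.connect("project.db")
db.execute("CREATE TABLE IF NOT EXISTS trees (name TEXT PRIMARY KEY, tree TEXT)")

def save(name, source):
    try:
        tree = ast.parse(source)   # an unclosed scope or brace never gets past this line
    except SyntaxError as e:
        raise ValueError(f"not saved, does not parse: {e}")
    # Store the tree, not the characters; braces and indentation become structure.
    db.execute("INSERT OR REPLACE INTO trees VALUES (?, ?)", (name, ast.dump(tree)))
    db.commit()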

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

After some searching, it would seem all 'visual programming languages' are some form of hubristic attempt at redefining programming using flowcharts.

All I'd like to have is strongly structured code; aside from visualizing the tree structure more explicitly than is customary in most IDEs today, at its most explicit settings I'd want it to read just like plain old code.

I could imagine stacking scopes horizontally to visually represent pieces of code that can run in parallel, but that's as fancy as I'd like it to get. I personally abhor most of the ideas that go under the name of 'visual programming'. I want to be able to see the relation between my code and what it compiles to.

Doesn't seem like such a thing exists, and although in principle IntelliSense could be implemented much faster and more elegantly in such a system, creating a two-way integration between such a representation and existing tools that expect unparsed input seems like a lot of work. That said, the one-way conversion is trivially implemented, and while automating the task of mapping compiler messages that reference the deparsed document back to the tree seems like a lot of work, doing so manually shouldn't be a huge hassle.
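For what it's worth, the one-way half (tree back to plain text) already ships with Python itself, assuming the stored form can be rebuilt into a standard AST; ast.unparse (3.9+) regenerates source that tools expecting unparsed input can consume:

import ast

tree = ast.parse("def area(w, h):\n    return w * h\n")   # stand-in for a tree pulled from the store
print(ast.unparse(tree))   # plain text again, suitable for feeding a conventional compiler or linter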
Visual Studio used to have a thing called the Class Wizard that basically did that for MFC.

It was a terrible mess.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

This topic is closed to new replies.
