
View Full Version : Objective-C programming



aostling
04-05-2009, 11:55 AM
This article in the NYT describes making a good living from home, if you can program a game for the iPhone: http://www.nytimes.com/2009/04/05/fashion/05iphone.html?_r=1&scp=2&sq=iPhone&st=cse. I read that with interest, because I thought of a game many years ago, a variant on the Rubik's Cube. It's based on a different "crystallography" but has the same general idea. It was too complicated for me to machine a prototype in 1982, but it could work as a virtual puzzle on an LCD screen.

That imaginary road to riches seems as distant as ever, though, since these iPhone games are programmed in Objective-C. Learning this could be really tough, especially for somebody like myself with no experience in object-oriented languages.

What is your experience with this? Is the leap from simple languages like Fortran or Basic to object-oriented programming as difficult as I expect it to be?

corbin
04-05-2009, 12:01 PM
This article in the NYT describes making a good living from home, if you can program a game for the iPhone: http://www.nytimes.com/2009/04/05/fashion/05iphone.html?_r=1&scp=2&sq=iPhone&st=cse. I read that with interest, because I thought of a game many years ago, a variant on the Rubik's Cube. It's based on a different "crystallography" but has the same general idea. It was too complicated for me to machine a prototype in 1982, but it could work as a virtual puzzle on an LCD screen.

That imaginary road to riches seems as distant as ever, though, since these iPhone games are programmed in Objective-C. Learning this could be really tough, especially for somebody like myself with no experience in object-oriented languages.

What is your experience with this? Is the leap from simple languages like Fortran or Basic to object-oriented programming as difficult as I expect it to be?

Hi Allan,
I'm a developer who worked on the iPhone 1.0 and 2.0 software -- including creating the Objective-C APIs that you would be looking to use.

Fortran and Basic are procedural languages, and it is quite a step to move from them to an object-oriented language. But of all the languages to learn, Objective-C is basically just a step above C, without too many fancy things added to the language (unlike C++ or Java).

Downloading the SDK is free --- as long as you have a Mac, it doesn't hurt to sign up for ADC, download the SDK, and start compiling the examples and running through tutorials to figure out how to do stuff.

corbin

(Beginner machinist, but advanced programmer -- I work on Cocoa at Apple)

aostling
04-05-2009, 12:07 PM
Downloading the SDK is free --- as long as you have a Mac, it doesn't hurt to sign up for ADC, download the SDK, and start compiling the examples and running through tutorials to figure out how to do stuff.
corbin


Corbin,

Thanks, this is encouraging. I do have a Mac, so I'll have a look at this. I may even decide to buy an iPhone, to get some exposure to this environment.

Evan
04-05-2009, 12:20 PM
What is your experience with this? Is the leap from simple languages like Fortran or Basic to object-oriented programming as difficult as I expect it to be?


Yes. I found it difficult to grasp the idea that essentially everything is event-driven and asynchronous. Tasks ("methods") run in their own time slice and when they finish they cough up the result. Your main loop essentially sits around waiting for stuff to become available and then sends it off to wherever it belongs. I would start perhaps with Visual Basic, as the code itself is mostly familiar, but it is a fully object-oriented language and really doesn't owe much to BASIC of the old days. There are quite a few changes in keywords and the syntax is a lot different.

For instance, this code fragment resets some variables to a known state. But to make sure that they are really reset before execution continues, I issue a command for the interpreter to synchronize all pending events by polling everything for finished tasks. That's the "Application.DoEvents" method call at the end of the routine.



'RESET
Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ResetButton.Click
    ' Put the state flags and counters back to their known starting values
    halt = 1
    flag = False
    UD1.Value = 1
    ud1shadow = 1
    ud2.Value = 1
    pending = 1
    exposed = 0
    totaltimeLed.Text = "00:00"    ' clear the elapsed-time display
    frames = 0
    closecom()                     ' close the com port
    RunButton.Enabled = True
    ' Process every pending event so the resets really take effect before anything else runs
    Application.DoEvents()
End Sub

lazlo
04-05-2009, 12:21 PM
Allan, Objective-C is a predecessor to C++ -- it's a blend of C and Smalltalk.
Most of it is straight C, with embedded Smalltalk. If you're comfortable with C and C++, Objective-C is pretty straightforward:

[anObject aMethod] in Objective-C is the same as anObject->aMethod() in C++.
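
As a minimal sketch (the class and method names here are invented purely for illustration, not from any real API), the same call in both notations:

// C++ version of a simple method call; the Objective-C equivalent is shown in the comment.
#include <iostream>

class Greeter {
public:
    void greet(const char *name) { std::cout << "Hello, " << name << "\n"; }
};

int main() {
    Greeter *aGreeter = new Greeter();
    aGreeter->greet("Allan");       // C++:          object->method(argument)
    // [aGreeter greet:"Allan"];    // Objective-C:  [object method:argument]
    delete aGreeter;
    return 0;
}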

I looked into developing some machinist apps for the iPhone (speeds and feeds, decimal equivalence tables, tap and drill chart, press and interference fits table, ....) awhile ago: you need to have a Mac, obviously, for the Objective-C environment, and you need to pay the $99 for Apple's iPhone Developer Program if you want to put apps on a phone or in the App Store (the SDK download itself is free).

If you're interested, Apple and Stanford are posting their 10-week iPhone developer's course as online course-ware:

http://blog.wired.com/gadgets/2009/04/apple-stanford.html

Evan
04-05-2009, 12:28 PM
Robert,

I am building a new computer and have been out of the loop for a while. I want your Intel-centric opinion on what is the best CPU out there for severe number crunching on stuff like CAD and Raytracing programs. The ray tracer I use in particular can make use of any number of cores or even distributed machines. I don't play games any more so I really don't care about that.

In the past the Athlon FPU ran circles around the Pentium FPU at a rate of about 2 to 1 according to many benchmarks on the various distributed computing projects. Has that changed? If so, how?

barts
04-05-2009, 02:26 PM
Robert,

I am building a new computer and have been out of the loop for a while. I want your Intel-centric opinion on what is the best CPU out there for severe number crunching on stuff like CAD and Raytracing programs. The ray tracer I use in particular can make use of any number of cores or even distributed machines. I don't play games any more so I really don't care about that.

In the past the Athlon FPU ran circles around the Pentium FPU at a rate of about 2 to 1 according to many benchmarks on the various distributed computing projects. Has that changed? If so, how?

The best FPU out there for ray tracing is prob. the Nvidia GPUs w/ CUDA; this gives a pretty amazing number of compute engines for low $$$.

http://www.nvidia.com/object/cuda_home.html#

If you're looking at doing number crunching on a traditional single socket CPU, I'd use an Intel i7 for maximum performance at this point.

http://arstechnica.com/hardware/news/2009/04/nehalem-xeons-touchdown-could-sweep-current-market.ars

The i7s are very fast indeed.

Oblig. disclaimer: I do kernel performance work at Sun Microsystems, where we build machines w/ AMD, Intel, and various SPARC cpus.

- Bart

dp
04-05-2009, 02:31 PM
With the tools available today programming in C++ is very much like programming in Visual Basic. A lot of it is just drag/drop. It's not like it was when I started writing C code using the DOS edlin editor, and it's especially easier than entering object code with paddle switches on the first computers I had access to. The very first computer I programmed belonged to Farmer's Insurance in Berkeley and used jumper wires and a screwdriver.

Evan
04-05-2009, 04:30 PM
The best FPU out there for ray tracing is prob. the Nvidia GPUs w/ CUDA; this gives a pretty amazing number of compute engines for low $$$.


I don't mean ray tracing in real time for display purposes. I am talking about crunching an image for later display. Some of these images take hours to compute, some even days. The ray tracer is POV-Ray and it supports true physical modeling of most optical phenomena. Some are enormously computationally intensive, such as atmospheric scattering in any of several models such as Rayleigh, Mie-Murky and Henyey-Greenstein.

Is it possible for the OS software to use the Nvidia FPU in that manner, similar to the cell processor on the PlayStation?

J Tiers
04-05-2009, 09:39 PM
With the tools available today programming in C++ is very much like programming in Visual Basic. A lot of it is just drag/drop. It's not like it was when I started writing C code using the DOS edlin editor, and it's especially easier than entering object code with paddle switches on the first computers I had access to. The very first computer I programmed belonged to Farmer's Insurance in Berkeley and used jumper wires and a screwdriver.

hee-hee..... putting in the boot program with toggles...... THAT takes me back a ways...... like 4k of core......

I no longer recall how the Bendix G-15 booted, that was the first one I used.

lazlo
04-05-2009, 09:47 PM
Hi Evan,

I'm probably a bit biased since I architected some of the multimedia (SSE4) instructions, but Nehalem, a.k.a. "Core i7," is a totally new microarchitecture and IMHO the best processor that Intel has built since P6 (Pentium Pro).

Nehalem's IPC (instructions per clock) is a big leap over Penryn (Core 2), and Penryn has a big advantage in IPC over Phenom. Each Nehalem core has a full-width (128-bit) floating point unit, and the new SSE4 multimedia instructions exploit that hardware. A 3-channel integrated memory controller substantially reduces memory latency, and Nehalem has the new CSI ("QPI") point-to-point interconnect in place of the old FSB.


The ray tracer I use in particular can make use of any number of cores or even distributed machines.

Nehalem is a quad-core, with each core having 2 hardware threads == 8 logical processors. So each Nehalem chip is effectively an 8-processor SMP (symmetric multiprocessor). If your raytracer is multithreaded, and especially if it's optimized for SSE 2/3/4, it will be very, very happy on Nehalem.
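
As a rough illustration (this is not POV-Ray's actual code -- just a made-up render_row/render_rows pair using C++11 threads), here's the general shape of farming scanlines out to however many hardware threads the machine reports:

#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for whatever per-scanline work the raytracer really does.
static void render_row(int y) {
    (void)y; // trace all the pixels in row y
}

// Render a contiguous block of scanlines [first, last).
static void render_rows(int first, int last) {
    for (int y = first; y < last; ++y)
        render_row(y);
}

int main() {
    const int height = 480;
    unsigned n = std::thread::hardware_concurrency(); // reports 8 on a quad-core Nehalem with SMT
    if (n == 0) n = 1;

    std::vector<std::thread> workers;
    const int rows_per_thread = height / static_cast<int>(n);
    for (unsigned i = 0; i < n; ++i) {
        const int first = static_cast<int>(i) * rows_per_thread;
        const int last = (i + 1 == n) ? height : first + rows_per_thread;
        workers.emplace_back(render_rows, first, last); // one slice of the image per thread
    }
    for (auto &t : workers)
        t.join(); // wait until every slice is finished

    std::printf("Rendered %d rows on %u threads\n", height, n);
    return 0;
}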

Some random benchmarks from Tom's Hardware (Anandtech has similar results). Nehalem has a huge performance margin over its contemporaries:

http://www.tomshardware.com/charts/desktop-cpu-charts-q3-2008/3DMark-Vantage-CPU,817.html
http://www.tomshardware.com/charts/desktop-cpu-charts-q3-2008/Lame-MP3,828.html
http://www.tomshardware.com/charts/desktop-cpu-charts-q3-2008/Blu-ray-HD-Video-Playback,834.html
http://www.tomshardware.com/charts/desktop-cpu-charts-q3-2008/Photoshop-CS-3,826.html

You can get a quad-core Nehalem for around $280 at Newegg et al:

http://www.newegg.com/Product/Product.aspx?Item=N82E16819115202

Evan
04-05-2009, 11:01 PM
Thanks Robert. POV-Ray is currently at version 3.6 with 3.7 scheduled soon. 3.7 is fully multithreaded and so will take advantage of whatever is available.

I will look at the prices here on Tiger Direct and see what they have and if I can afford it. I'm on a budget but computer parts are really cheap right now. What's the next best in your opinion?

Evan
04-05-2009, 11:07 PM
I no longer recall how the Bendix G-15 booted, that was the first one I used.


It was the first machine I used as well. It booted, if you can call it that, with a hardware instruction decoder that allowed it to read the rotating drum memory and load up a very simple I/O routine for the teletype. There wasn't much to boot as it came up with no software at all, just an octal input "editor" with no editing capability, used to directly input CPU instructions, of which there were only about 20 or so. It had only one branch instruction, branch if zero.

Roy Andrews
04-05-2009, 11:37 PM
Never went the programmer route; I only had to do enough to troubleshoot. But the first CPU I worked on had heated core memory, and we entered the boot-up with pushbuttons, which started a teletype punch reader.

barts
04-06-2009, 12:08 AM
Is it possible for the OS software to use the Nvidia FPU in that manner, similar to the cell processor on the PlayStation?

Actually, the OS really doesn't get involved other than to set up device mappings, AFAIK...

The programming model for CUDA is ... tricky ... in terms of synchronization for good performance; I'd guess for POV-Ray you're better off w/ programming the main CPU.

BTW, my son and I built a quad-core i7 (Nehalem) for my daughter this past Christmas; w/ 6 GB of triple-channel DDR3 it's very fast indeed.

- Bart

Evan
04-06-2009, 01:10 AM
I checked it out and POV-Ray hasn't been ported to CUDA. It probably won't be either, because the main ray-trace loop in POV-Ray runs at a very high level in the code and so involves a great deal of very complex code. POV-Ray has been in development for longer than nearly any other program for the PC that is still a current product. I first used it on an Amiga 500 in about 1985 or so.

Some of the work people turn out with it is just amazing. It also has full animation capability. The programming language is what they call "Turing complete," meaning that it is a real programming language. Also, as far as anybody knows, it is the only language that produces executable (interpreted) code with which you cannot write a virus.

lazlo
04-06-2009, 12:41 PM
Thanks Robert. POV-Ray is currently at version 3.6 with 3.7 scheduled soon. 3.7 is fully multithreaded and so will take advantage of whatever is available.

Here's a benchmark from Anandtech showing how impressive Nehalem is: ~60% faster on POV-Ray than a quad-core Penryn at the same frequency. And that's without recompiling POV-Ray for SSE4:

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3448&p=9&cp=4
http://i164.photobucket.com/albums/u15/rtgeorge_album/Nehalem.gif


I will look at the prices here on Tiger Direct and see what they have and if I can afford it. I'm on a budget but computer parts are really cheap right now. What's the next best in your opinion?

You can get a Penryn (Core 2 Duo) for a lot less. Penryn was originally supposed to be an optical shrink of Merom, but they added a radix-16 divider and a subset of the SSE4 instructions. Both my machines at home are quad-core Penryns. The only annoyance is that Intel marketing makes it a pain to tell which 45nm parts are Penryns: the Q9xxx are the Penryn quad-cores, and the E8xxx are the Penryn dual-cores.

dp
04-06-2009, 12:45 PM
I'm in the process of building a 1U Lintel box and have been looking at available quad core cpu's. It's hard to cool those buggers.

lwalker
04-07-2009, 04:54 PM
I'm probably a bit biased since I architected some of the multimedia (SSE4) instructions,

Seeing stuff like this is why I love the Web :-)

Bonus points for reading it on HSM!

barts
04-07-2009, 05:53 PM
I'm in the process of building a 1U Lintel box and have been looking at available quad core cpu's. It's hard to cool those buggers.

If you were to run OpenSolaris on that, you'd appreciate all the work done to put
Nehalem CPUs into deep C states when idle... this makes the box a lot more efficient at idle....

- Bart

Evan
04-07-2009, 08:02 PM
Prices for a bare-bones Nehalem box are way out of my range right now. The cheapest one on Tiger Direct is over $1200 for CPU, motherboard, case and RAM. For that price I could buy 3 AMD Phenoms and network them. I wish I could afford it but not now, for sure. I have to get something as my main machine's motherboard is going flaky and I'm running it at half speed to keep it going, which really sucks.

I think I might just go cheap for now and upgrade later when my pension kicks in in September.

lazlo
04-07-2009, 10:27 PM
If you were to run OpenSolaris on that, you'd appreciate all the work done to put Nehalem CPUs into deep C states when idle... this makes the box a lot more efficient at idle...

C6 is amazing. When the power architect first described it to me (an on-die microcontroller to marshal the cores in and out of sleep state) I thought it was obscenely complicated. But Nehalem is the first multi-core part that can individually power throttle the cores. The microcontroller also keeps track of overall chip temperature and will "overclock" the cores if it's safe.

That means that if you have a single threaded app running, Nehalem will shut down 3 of the 4 cores, and apply the entire power budget to running the remaining core at the highest frequency possible.


I'm in the process of building a 1U Lintel box and have been looking at available quad core cpu's. It's hard to cool those buggers.

To paraphrase Spiderman (actually, Uncle Ben): with great power comes great heat :D

Seriously, it's 130W TDP (thermal design power). If you divide the benchmark performance by wattage, it's incredibly power-efficient. If you don't need 8 threads, get the dual-core version.

aostling
04-07-2009, 10:41 PM
If you don't need 8 threads, get the dual-core version.

Who are these people who need 8 threads? Is this mostly of interest to gamers?

dp
04-07-2009, 11:07 PM
If you were to run OpenSolaris on that, you'd appreciate all the work done to put
Nehalem CPUs into deep C states when idle... this makes the box a lot more efficient at idle....

- Bart

I'd far rather go with Solaris as that is how I've earned my living for the last 20 years, but the current state of SUNW/IBM is discouraging. Sun never quite rebounded from being the dot in dot-com. All my systems here are Sun SPARC systems, in fact, except my Mac workstations and one Red Hat system I used when I worked at Expedia for testing purposes.

My current Solaris 10 project is a firewall I'm building for a guy in Hawaii. The little Netra X1 SPARC boxes are the perfect size and even at 400 MHz have no trouble staying ahead of a DSL line, and lordy are they cheap for 1U systems :)

lazlo
04-07-2009, 11:09 PM
Who are these people who need 8 threads? Is this mostly of interest to gamers?

In all honesty Allan -- not many. Game developers started to use 2 - 4 threads around 2006. Xbox 360 has 6 hardware threads, but two of those are support threads for the OS, so XBox 360 games often have 4 threads.

A lot of modern PC games are developed first for the game consoles (XBox360 and PS3) and then ported to the PC platform, so many modern games are threaded (since even an entry-level processor is now dual-core), but very few games will take advantage of 8 hardware threads.

There is a class of "embarrassingly parallel" scientific apps that make good use of as many logical processors as you can provide -- SETI@home (a giant, distributed FFT) or Folding@home (parallel pattern searches), as well as raytracers like POV-Ray, which Evan is using.

If you develop code, parallel make (make -j8, distcc,...) is fabulous.

dp
04-07-2009, 11:10 PM
Who are these people who need 8 threads? Is this mostly of interest to gamers?

Web servers and backend servers that run a lot of Java virtual machines, base systems that are hosts to multiple virtual machines, search/indexing engines, etc. all benefit from massive threading in the hardware.

barts
04-08-2009, 12:41 AM
Who are these people who need 8 threads? Is this mostly of interest to gamers?

Or people doing software development, running servers, etc.... I've used a desktop w/ 48 threads; that was prob. excessive, although compiles were sure fast :-).

Given that clock rates aren't going to go through the roof, we're likely to see the ever-increasing numbers of transistors used for additional cores/threads. The company I work for (Sun) builds a single chip w/ 64 threads; I expect other chip makers to head in the same direction.

Evan
04-08-2009, 02:08 AM
Given that clock rates aren't going to go through the roof,

Don't count on that. There are specialized signal processors that are clocking over 200 ghz. Theoretically possible clock speed for a given number of transistors goes up exponentially as the areal feature size goes down linearly.

Major improvements are still to be had by packing features vertically in 3D. Then the number of parts within a given distance of each other increases as the cube of the decrease in linear feature size.

Removing heat is the main problem of course but we are nowhere near the lower limit of the amount of energy that it requires to represent a change in state of a bit. We don't even know how little that is but it can be shown that it is close to infinitely small.

dp
04-08-2009, 02:30 AM
Frequencies could climb again if graphene or something like it becomes a reality. One of the CPU vendors was looking at radio waves for data transmission in microchips. There's an upper constraint on frequency set by the physical dimensions of the wafer. At uber-high frequencies, propagation becomes an issue. There's also a limit to making things smaller, so finding new ways to make things faster within these physical limits is taking some creativity.

lazlo
04-08-2009, 09:16 AM
Don't count on that. There are specialized signal processors that are clocking over 200 ghz. Theoretically possible clock speed for a given number of transistors goes up exponentially as the areal feature size goes down linearly.

Moore's Law has been running out of steam since 90nm technology (circa 2004) -- it bit the high-frequency, hyperpipelined processors like Willamette/P4 hard. The fundamental problem is that as Le (the effective gate length, which determines transistor switching speed) shrinks on each process, the current leakage increases proportionally. Leakage is the amount of power the transistors are losing just sitting there, not switching.

For small chips like a million-transistor signal processor, this isn't much of a problem. If a million transistors generate 50 watts of heat because of leakage, so be it. But modern microprocessors and GPUs are well over 2 billion transistors -- that's 2,000 times more transistors. If you project that 50 watts of heat for a high-frequency signal processor to a 2 billion transistor CPU or GPU, that's 100,000 watts :D
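
Scaled out explicitly, that projection is just:

\[
50\ \mathrm{W} \times \frac{2 \times 10^{9}\ \text{transistors}}{1 \times 10^{6}\ \text{transistors}} = 100{,}000\ \mathrm{W}
\]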

We can run small sections of 45nm transistors at much faster frequencies, but we can't hope to cool 2 billion transistors running at 10's of Ghz.

If you haven't noticed, processor speeds have gone backwards. A lot. Prescott was shipping at 4 Ghz in 90nm (2 process generations ago). Nehalem is shipping at 2.55 Ghz, and people are complaining about the 130 Watts.

So unless there's some miracle of physics, like room temperature superconductors, don't expect the clock frequencies to improve a whole lot.

In fact, if you remove the hyperpipelined P4 family as a science experiment gone bad, processor frequencies have been pretty flat at 2 - 2.5 Ghz since 2004.

Evan
04-08-2009, 10:58 AM
If you haven't noticed, processor speeds have gone backwards. A lot. Prescott was shipping at 4 Ghz in 90nm (2 process generations ago). Nehalem is shipping at 2.55 Ghz, and people are complaining about the 130 Watts.


I have noticed, all right. But that is largely a matter of architecture rather than a fundamental limitation. The fundamental limit is of course the speed of light in whatever medium is doing the conducting. For an ordinary piece of tuned cable such as coax that is about 1/2 of the speed in a vacuum. That works out to a propagation delay of about 2 nanoseconds per foot for a typical signal path. If a die is 1 cm across and we can use 4 wavelengths as a minimum-length signal path, then we have 30 cm per nanosecond divided by 2 = 15 GHz per cm. Divide by 4 for the 4-times-wavelength impedance matching and what do you know? ~4 GHz. Not a coincidence.
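
Laid out step by step with the same numbers (half the vacuum speed of light for propagation, a 1 cm die, and the 4-wavelength minimum path):

\[
v \approx \tfrac{1}{2}\,c \approx \tfrac{1}{2} \times 30\ \mathrm{cm/ns} = 15\ \mathrm{cm/ns},
\qquad
f_{1\,\mathrm{cm}} \approx \frac{15\ \mathrm{cm/ns}}{1\ \mathrm{cm}} = 15\ \mathrm{GHz},
\qquad
f_{\max} \approx \frac{15\ \mathrm{GHz}}{4} \approx 4\ \mathrm{GHz}.
\]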

This is where the architecture comes in. By making the various parts of the chip asynchronous and by stacking layers the mean free path can be made shorter and faster. The technology already exists to make the transistors much faster.

Some years ago an IBM Fellow was giving a lecture on this subject. One of the things he reported was that while he wasn't free to give details, he could say that based on their work they saw no fundamental limitations to the progression of Moore's Law for the next 30 years.

I should also point out that if the entire die is designed as a tuned circuit, then it is possible to use conductors down to 1/4 wavelength, giving a 60 GHz limit on a one-centimetre die.

lazlo
04-08-2009, 11:32 AM
The fundamental limit is of course the speed of light in whatever medium is doing the conducting.

This is where the architecture comes in. By making the various parts of the chip asynchronous and by stacking layers the mean free path can be made shorter and faster. The technology already exists to make the transistors much faster.

The limitation is not the speed of light -- we can make the transistors switch much faster than the 2.55 Ghz that Nehalem is shipping at. The problem is the faster they switch, the more heat they generate, and we can't cool 2 billion transistors at faster than 2.55 Ghz.

3D stacking just makes the problem worse -- you can't cool the chip to begin with, and now you're talking about laying another layer of chip on top of it that's even hotter.


based on their work they saw no fundamental limitations to the progression of Moore's Law for the next 30 years.

That's true, but it's pointless. When Intel transitions to 32nm at the end of the year, the number of transistors will double. But the transistor leakage will increase by 28%, so the power problem will get much worse. We can barely cool 2 Billion transistors at 2.5 Ghz, we have no hope of cooling 4 Billion transistors at 2.5 Ghz in 32nm.

So although the optical dimensions will continue to shrink, the process advantages are diminishing.

barts
04-08-2009, 12:06 PM
So unless there's some miracle of physics, like room temperature superconductors, don't expect the clock frequencies to improve a whole lot.

In fact, if you remove the hyperpipelined P4 family as a science experiment gone bad, processor frequencies have been pretty flat at 2 - 2.5 Ghz since 2004.

Yup. All indications are processor frequencies will be changing much more slowly than in the past. But there will be plenty of other disruptive effects, which is what has made my job so interesting over the last 20 years....

- Bart

dp
04-08-2009, 12:08 PM
That's true, but it's pointless. When Intel transitions to 32nm at the end of the year, the number of transistors will double. But the transistor leakage will increase by 28%, so the power problem will get much worse. We can barely cool 2 Billion transistors at 2.5 Ghz, we have no hope of cooling 4 Billion transistors at 2.5 Ghz in 32nm.

So although the optical dimensions will continue to shrink, the process advantages are diminishing.

The trick will be in not heating them in the first place. That is going to take some low voltages we've not seen yet, and better conductors than silicone (microwaves, light). Or move the heat where it is easier to manage with strategically placed Peltier-like devices and liquid cooling strategies. These are grossly inefficient, but in massively parallel computing one of the things that goes out the door is cost concerns.

One of the more fascinating areas of manufacturing is how computer components are cooled in earth satellites. All waste heat has to be radiated into space as there's no other means to shed unwanted power. There's some trick stuff going on up there. Again, part of the solution is to not create the heat in the first place.

lazlo
04-08-2009, 12:24 PM
The trick will be in not heating them in the first place. That is going to take some low voltages we've not seen yet, and better conductors than silicone (microwaves, light).

Dynamic power is CV^2f. In other words, it's proportional to the square of the supply voltage, which is bounded below by the transistor Vt. So we already operate at Vmin -- the lowest possible voltage at which the transistors still operate.
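
Written out with the usual activity factor that the shorthand above leaves out:

\[
P_{\text{dynamic}} = \alpha \, C \, V^{2} \, f
\]

where α is the fraction of the capacitance switching each cycle, C is the switched capacitance, V is the supply voltage, and f is the clock frequency. Since V is already pushed down to Vmin, there is nothing left to gain from the V^2 term.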

Optics, like 3D stacking, is one of those technologies that for 20 years now is "coming real soon."

By the way, I never thought of using fake boobs in a semiconductor Dennis -- we should look into that! :D

barts
04-08-2009, 12:27 PM
It appears that single-thread performance is leveling off quite a bit.... the future is threads, and programmers will have to be dealing w/ concurrency.
Games are already written this way, and enterprise software has been doing this in large scale since the mid 90s....

Look how many processors are in modern graphics cards...


- Bart

dp
04-08-2009, 12:46 PM
Dynamic power is CV^2f. In other words, it's proportional to the square of the supply voltage, which is bounded below by the transistor Vt. So we already operate at Vmin -- the lowest possible voltage at which the transistors still operate.

Optics, like 3D stacking, is one of those technologies that for 20 years now is "coming real soon."

By the way, I never thought of using fake boobs in a semiconductor Dennis -- we should look into that! :D

Heh - you can see where my head is :) Pardon the typo.

This is where the end of the current semiconductor technology is predicted. At these sizes they're no longer semiconductors as you've observed, and that will have to change. A new true semiconductor gate needs to be discovered. Nanotube charge-coupled matrices perhaps?

aostling
04-08-2009, 05:37 PM
A new true semiconductor gate needs to be discovered. Nanotube charge-coupled matrices perhaps?

There is a YouTube video at the bottom of this page, showing how carbon nanotube logic gates are fabricated.

http://soe.stanford.edu/research/ate/wong_philip.html

Evan
04-08-2009, 06:30 PM
Optics, like 3D stacking, is one of those technologies that for 20 years now is "coming real soon."


I have been waiting a long time for a "hang on the wall flat TV". Now we finally have it and what happens? TV isn't worth watching any more.

Present technology is at a turning point since it can't be extended much further. There are plenty of options, though. The speed-of-light limit places a maximum clock rate of about 60 GHz on a 1 cm square die if it can be operated as a 1/4-wave tuned circuit, and around 10 to 15 GHz if it can't. Asynchronous processors can sneak around that by clocking small portions separately. The cooling problem is not even close to being a real limitation; it just needs a different approach, such as (insert name of semiconductor here ) on diamond. Diamond is a 4 to 5 times better heat conductor than silver while still remaining an insulator. They have recently learned how to vapor-deposit thin-film amorphous diamond on just about anything.

There are many approaches to yet faster computation and parallel processing is only one of them. There are many tasks that cannot be effectively processed in parallel (1) so there will be a concerted effort to improve the speed of single CPUs for a long time to come.

1: Iterative calculation of a slowly converging series is an example.

Note: Example for the example: The three body problem in orbital mechanics has had a solution in principle since about 100 years ago. Unfortunately it converges so slowly as to be entirely useless for any sort of practical application.