Discussion starter · #21 ·
This is also about 22nm and 3D transistors, but it applies.

http://www.anandtech.com/show/4313/intel-announces-first-22nm-3d-trigate-transistors-shipping-in-2h-2011
At lower voltages Intel is claiming a 37% increase in performance vs. its 32nm process and an 18% increase in performance at 1V. High end desktop and mobile parts fall into the latter category. Ivy Bridge is likely to see gains on the order of 18% vs. Sandy Bridge, however Intel may put those gains to use by reducing overall power consumption of the chip as well as pushing for higher frequencies.
The gains are definitely bigger at the lower end, but not absent on the higher end. With the Core i7 2700K replacing the Core i7 2600K (whether or not the latter goes EOL), and it being an even higher bin, I have little doubt that Ivy Bridge will get over that last hill and make of 5GHz what Wolfdale did of 4GHz. Either way, one of these chips is likely in my future, so here's to hoping.
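Just to put that 18% figure into rough numbers, here's a back-of-the-envelope sketch (my own math with an assumed Sandy Bridge baseline, not anything Intel has claimed about actual clocks):

```python
# Back-of-the-envelope: what a flat 18% gain would mean if Intel spent all of it
# on clock speed (which they almost certainly won't do at stock).
# The 4.5 GHz baseline is just an assumed typical Sandy Bridge 24/7 overclock.

sb_overclock_ghz = 4.5      # assumed Sandy Bridge overclock baseline
process_gain = 0.18         # Intel's quoted 22nm gain at 1V vs. 32nm

ib_estimate_ghz = sb_overclock_ghz * (1 + process_gain)
print(f"Hypothetical Ivy Bridge clock: {ib_estimate_ghz:.2f} GHz")
# -> roughly 5.3 GHz, which is why 5GHz doesn't look like a crazy hope
```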
 
It doesn't apply though. It's up to Intel how they want to balance the 22nm improvements in Ivy Bridge, Haswell, and Silvermont (and Knights Corner if you really want to be thorough). There's nothing stopping them from spending the entire perf/W improvement on W instead of perf, and you shouldn't hold them to hype they aren't making.

The i7-2700K is only 100MHz higher than the 2600K, and will still cost substantially more. What you're seeing are the limits of what SB can do at 95W; I imagine Intel has to bin aggressively to reach the 2700K's levels.

When you mention 4GHz for Wolfdale and hoping for 5GHz for IB I can only assume that you mean overclocking potential, because there's no way that IB will hit 5GHz at 77W. In that case, the reduced gate delay from 22nm may raise attainable clocks if you can live with the power consumption.
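To make the "reduced gate delay" point concrete, here's a rough sketch of why a faster gate raises the frequency ceiling (the delay and logic-depth numbers are made up for illustration; only the relationship matters):

```python
# Rough sketch: a pipeline stage's cycle time is roughly logic depth * gate delay,
# so a faster gate on 22nm raises the attainable clock for the same design.
# All numbers below are illustrative assumptions, not real Intel figures.

gates_per_stage = 20          # assumed logic depth of a pipeline stage
gate_delay_32nm_ps = 11.0     # assumed per-gate delay on 32nm, in picoseconds
gate_delay_22nm_ps = 9.0      # assumed (faster) per-gate delay on 22nm

def max_clock_ghz(gate_delay_ps: float) -> float:
    cycle_time_ps = gates_per_stage * gate_delay_ps
    return 1000.0 / cycle_time_ps   # 1000 ps per ns, so this comes out in GHz

print(f"32nm ceiling: {max_clock_ghz(gate_delay_32nm_ps):.2f} GHz")
print(f"22nm ceiling: {max_clock_ghz(gate_delay_22nm_ps):.2f} GHz")
# The catch, as noted above, is that power climbs right along with the clock.
```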
 
Right, so IB might be a better overclocker. Unfortunately there are a lot of things that can limit it even if the pipeline stages are made faster. There have been rumors circulating that Intel's 22nm is having difficulties at higher clocks...

So long as Intel isn't promoting anything officially I'd stay reserved.
 
Discussion starter · #25 ·
Of course Intel won't likely release 4GHz stock parts (or will they...); the power savings are usually a big part of what they go with. But all else being equal, a drop in the voltage needed and in temperatures takes care of two of the three issues (the three being voltage, heat, and the CPU itself). Overclocking on Sandy Bridge is much less dependent on the motherboard and RAM than it used to be in the FSB days, so those two are all but out as well, which leaves just the chip. Many Sandy Bridge CPUs do 5GHz as it is; they simply need a fair bit of voltage and/or run warmer, so I'm rather hopeful Ivy Bridge will all but get over that relatively small hill.
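For what it's worth, the reason a voltage drop helps so much is that dynamic power scales roughly with V² × f. A quick sketch (the voltages and clock are assumptions picked to look like a typical 5GHz Sandy Bridge overclock, not measurements):

```python
# Quick sketch of dynamic CPU power scaling: P is roughly proportional to V^2 * f.
# Baseline voltage/clock below are assumed, not measured.

def relative_power(v_new, f_new, v_old=1.45, f_old=5.0):
    """New dynamic power as a fraction of the old, using P ~ C * V^2 * f."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# If Ivy Bridge could hold the same 5 GHz at, say, 1.30 V instead of 1.45 V:
print(f"Power at 5 GHz, 1.30 V vs 1.45 V: {relative_power(1.30, 5.0):.0%}")
# -> about 80%, i.e. roughly a 20% cut in dynamic power (and heat) from voltage alone
```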
 
What I'm mostly interested in is the GPU performance.
I want a thin laptop with great battery life and decent GPU performance.
Will wait for Ultrabooks to use this.
I'd bet that the MacBook Air will get these before Ultrabooks do, because that's how Intel rolls...

The MacBook Air also got the first, and still the best, ultra-low-voltage Sandy Bridge CPUs around. I don't see anything else in the pipeline with them. The obvious advantage is a higher Turbo Boost frequency at the same TDP...

Sometimes one has to wonder if Intel is not conspiring with Apple to make other mobile manufacturers look bad.
 
What I'm mostly interested in is the GPU performance.
I want a thin laptop with great battery life and decent GPU performance.
Will wait for Ultrabooks to use this.
Then you gotta go Llano, not Intel.
 
Yeah RAP I might consider a MacBook Air this time around. :p
Just a word of warning: Intel's GPUs still suck major donkey's balls, and I just noticed you put the GPU down as a requirement.

It's the absolutely, unthinkably, horribly written drivers that are at fault here. The chips are excellent, but graphics performance sucks because the drivers still constantly offload stuff to the CPU. :( It's a hit-or-miss experience; thankfully, most modern games don't suffer, but a select few still do, and compatibility with older games is like playing Russian Roulette by yourself with 5 bullets!

So as Schumi said, highly consider Llano for GPU performance.
 
Last time Intel tried to pack multiple GPUs together, it never came to the market. :p

Or was it multiple CPUs acting as a GPU? Damn, that was "so long ago" that I forgot. :innocent:

Seriously, though, I think Intel and GPU haven't been good friends since forever...
 
I think the Tick+ refers to the tri-gate transistor technology; it's the first time Intel will be using it.

The IGP is one of Ivy Bridge's focuses, but I'm sceptical whether it can even match AMD's Llano APUs, especially with the A8-3870K incoming (a higher-clocked, unlocked quad core with a fast IGP).
 
I don't think the + has anything to do with FinFETs... other ticks have had process improvements beyond just shrinks, HKMG for instance. I think it refers to:

- The minor uarch tweaks, since strictly speaking ticks are supposed to leave the uarch alone, but in practice all but the first one (Cedar Mill) added something or other.
- More likely, the IGP update, which is being considered a "tock" in terms of design changes.
 
Discussion starter · #38 ·
I don't think the + has anything to do with FinFETs... other ticks have had process improvements beyond just shrinks, HKMG for instance.
Stole what I was going to say. The primary point of a tick is that it's a process shrink and will be the "first using XXnm" so it's not that. It could be referring to the 3D transistors, as this is seemingly a bigger deal than 45nm was, but I doubt that's it (or at least all of it) either.
The minor uarch tweaks, since strictly speaking ticks are supposed to leave the uarch alone, but in practice all but the first one (Cedar Mill) added something or other.
I thought Cedar Mill was essentially just Prescott with twice the L2 cache (and the process shrink from 90nm to 65nm)?
 
I thought Cedar Mill was essentially just Prescott with twice the L2 cache (and the process shrink from 90nm to 65nm)?
Yeah that's what I was saying, that Cedar Mill didn't add anything: it was part of the first generation of Intel's tick-tock model, that is, the first set of ticks. There were actually 2MB L2 versions of Prescott so it really was nothing more than a shrink.
 
Discussion starter · #40 ·
Oh, my fault. I read that wrong. I thought you were saying it added something.

45nm added more than just a shrink, and 22nm looks to be doing so as well; I agree on those two. But what did 32nm/Westmere add other than maybe lower power consumption/leakage/heat? That hardly counts since it comes with the territory.

Westmere was actually slower clock for clock than Nehalem, albeit ever so slightly (using this as my source), so it seems it was just a die shrink and the cores were essentially the same (Nehalem being slightly faster is probably down to cache/bandwidth advantages rather than better raw IPC). So Westmere/32nm looks like a die shrink and nothing more, nothing less.
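If anyone wants to sanity-check a "clock for clock" claim themselves, the usual napkin method is just benchmark score divided by clock speed. A sketch with placeholder numbers (not real review data):

```python
# Sketch of a per-clock ("clock for clock") comparison: benchmark score / clock.
# The scores and clocks below are placeholders, not real review results.

chips = {
    "Nehalem":  {"score": 100.0, "ghz": 3.33},
    "Westmere": {"score": 99.0,  "ghz": 3.33},
}

for name, c in chips.items():
    per_clock = c["score"] / c["ghz"]
    print(f"{name}: {per_clock:.1f} points per GHz")
# Differences this small are within cache/bandwidth noise, which is the point above.
```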

Nehalem was like a Pentium 4 that performed; its heat/power consumption was through the roof, so they had to get that under control, and from what I can see, they did admirably. I don't see what else was done, though. I don't think Cedar Mill/65nm stands as the only die shrink that was basically just a die shrink, but I never fully followed 32nm/Westmere. I haven't heard much about it "adding" anything, though.
 