47 Comments

  • rd_nest - Friday, April 3, 2015 - link

    Waiting for the S6 full review and the M9 final part. It's basically a waste of time reading other reviews.
  • lilmoe - Friday, April 3, 2015 - link

    I really hope we get a VERY comprehensive review for the GS6 since it has LOTS of new design wins. I don't mind waiting a week or two longer to get that. We've already seen DOZENS of online reviews and comparisons, but more educated/technical input would be much appreciated. I'd love to read more about sustained performance of the 7420 and how it compares to the SD810 and the 5433 (i.e. the extent of the efficiency we're getting out of Samsung's 14nm process vs TSMC/Samsung 20nm). I'd also like to see Geekbench, for example, run 10 times in a row to get a better look at performance degradation when throttling occurs (a rough sketch of how that could be scripted is at the end of this comment). There seems to be so much inconsistency in reported efficiency/battery life, which suggests that different use cases and firmware updates yield VERY different battery life. I bet money the new modem/transceiver combo also has a hand in this, and since it's Samsung's first attempt at an LTE modem, it would be nice to see it compared with Qualcomm's MDM.
    It would also be incredibly useful if Andrei could tinker a bit with the phone and set the resolution to 720p (pretty please) to see the real impact resolution has on battery life, if at all possible (concrete data would shut me up for sure).
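    For what it's worth, a rough sketch of how that could be scripted over adb (assuming a stock shell where the "wm size"/"wm density" commands behave as they do on Lollipop; the density value below is a guess, and the panel obviously keeps its native pixel count either way):

        import subprocess

        def adb_shell(*args):
            """Run a shell command on the attached device and return its output."""
            return subprocess.run(["adb", "shell", *args], capture_output=True,
                                  text=True, check=True).stdout.strip()

        # Render the UI at 720p instead of the native 1440p, then run the battery test.
        adb_shell("wm", "size", "720x1280")
        adb_shell("wm", "density", "320")   # hypothetical density for ~5.1" at 720p
        print(adb_shell("wm", "size"))      # sanity check: should report the override

        # ... run the battery rundown test here ...

        # Restore the native resolution/density afterwards.
        adb_shell("wm", "size", "reset")
        adb_shell("wm", "density", "reset")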

    Lastly, CAMERA. If comparison shots are made, PLEASE include samples from a proper DSLR for indoor, outdoor, and low-light stills and videos. Samsung's ISP tends to be highly affected by conventional (yellowish) lighting. When I take pictures with my Note4, they tend to come out warm/yellowish indoors at night. However, I recently installed warm white LEDs (~4000K) and the difference in the color temperature of the shots was dramatic (much more natural). It was MUCH better than even 6000K fluorescent lights, which also made the camera take yellowish shots, even though the LEDs were warmer.

    Sorry guys, but I'm really setting the bar high for your review :P Oh yeah, please let go of Chrome, or better, let go of browser benchmarks altogether :D
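
    (Re: the throttling point above — a minimal sketch of how sustained-performance degradation could be logged from a PC, assuming adb access and the usual cpufreq sysfs nodes, which can differ per kernel. It just samples per-core clocks once a second while a benchmark is looped by hand on the device; a steady decline in the big-cluster clocks means throttling.)

        import subprocess, time

        def adb_shell(cmd):
            """Run a shell command on the attached device via adb."""
            return subprocess.run(["adb", "shell", cmd],
                                  capture_output=True, text=True).stdout

        # Sample every core's current clock (in MHz) once per second for 10 minutes
        # while Geekbench (or any sustained workload) is looped on the device.
        for _ in range(600):
            out = adb_shell("cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")
            freqs_mhz = [int(f) // 1000 for f in out.split() if f.isdigit()]
            print(time.strftime("%H:%M:%S"), freqs_mhz)
            time.sleep(1)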
  • hung2900 - Friday, April 3, 2015 - link

    I really want to see how Samsung improves its implementation of the Cortex-A5x big.LITTLE duo compared to the Exynos 5433, and I also want to see how badly HTC cheated to reduce heat and increase battery life, to the point that the M9 is even slower than the M8.
  • Samus - Friday, April 3, 2015 - link

    It's interesting how different Samsung's SoC approach is from Apple's (who makes gigantic dies with only 2-3 cores) while Samsung makes a tiny die with 4-8 cores. I guess the Android ecosystem is heavily optimized for quad-core and big.LITTLE, so it makes sense to stick with it, but there is no denying Apple is the performance leader in ultramobile ARM SoCs in "per-core" terms, so maybe they are doing something right by making a smaller number of powerful cores?
  • Sunrise089 - Friday, April 3, 2015 - link

    Certainly not exclusively Samsung's fault, but the Android space is heavily optimized for many-core MARKETING. There have been enough instances of inferior more-core editions of phones, especially internationally, to make it pretty clear what's driving the core counts.
  • lilmoe - Saturday, April 4, 2015 - link

    The thing is, Apple's Cyclone lead over a standard Cortex A57 (at similar voltage/TDP) isn't any more than 5-15% ATM. Android can make use of all the cores it can get, while iOS doesn't have much use for that many cores. It's a difference in OS utilization more so than individual apps. Apple's cores have more thermal headroom/boost since there aren't as many, and that's why you're getting faster single-threaded performance. It's more about compromises than it is about the underlying tech.

    That said, I wouldn't call it "Apple's lead", far from it actually. Most Core i3s and i5s have better single-thread performance than Core i7s (same class/TDP), but we all know which series is faster overall. So you can't just say that the A8 is faster than the Exynos; it's definitely the other way around, and it has been so for quite a while.
  • Morawka - Saturday, April 4, 2015 - link

    Apple's SoC platform lead is genuine. The big ol' 4MB of unified cache keeps the whole system agile. GPU or CPU, it all has access.

    Apple's SoC designs differ from the A57/A53 because Apple has optimized the layout of the SoC, using as little wire trace as possible to the most important components. All SoC components are placed for a very specific reason, whereas the A57/A53 uses auto layout with some tweaks. This is fundamentally why Apple SoCs are better than everyone else's. They control the hardware and the software, and no Android OEM is going to have that much control anymore. Google quit making phones, so...

    iOS could use extra cores if Apple wanted to add them. See the iPad Air 2, where Pixelmator utilizes the third core to its full extent, drastically decreasing render times compared to the original iPad Air. The operating system is probably using it as well for background tasks, but nobody will know for sure unless you work for Apple.
  • lilmoe - Saturday, April 4, 2015 - link

    Apple can "afford" to make their own design. They have the volume needed for good pricing on their orders, and they don't have to worry about a fab that needs to keep production going to turn a profit or at least break even. Samsung, among others, can absolutely make their own proprietary/vertically-integrated designs (think Hummingbird), but for some reason Samsung LSI prefers to make a more generic design that can be sold separately. This was a huge point of criticism of Samsung by many (myself included). There's absolutely no reason why Android can't be optimized for a more vertically integrated approach, but there are more reasons why Samsung opted for this route, the biggest one being Google themselves. Other reasons include built-in LTE modems, and price. Samsung Mobile, LSI, and Electronics have proven to be immature in component collaboration (the clash of the bureaucracy!). Their big boss stepped in last year and "ordered" them to get on it.

    "This is fundamentally why apple soc's are better than everyone else"

    "Better" is a bit ambiguous. "Better for the job at hand" would sound more realistic. And yes, I believe the A8 is better than Exynos, for example, in running iOS. One of many good reasons that's so is because iOS doesn't handle multi-tasking the same way Android does.

    Apple never shares info about their architecture. Most of what you said is guesswork. A more probable difference is that Cyclone (like Krait) is designed with a higher clock "range" and various efficiency/performance *levels* in mind. Again, Apple can afford relatively larger dies to hit more performance at lower clock speeds. A57s are mainly designed for higher performance/clocks and not so much optimized for efficiency. That's because big.LITTLE was designed to split the performance/efficiency gap across different core architectures, hence the combination of A53/A57 (A7/A15 previously). Supposedly, splitting that performance/efficiency range is "easier" (at the core level) and yields better efficiency and higher performance since the design (of the cores) doesn't need to be too complex (logically, optimizing for one thing should be easier). BUT, that theory isn't as easy in practice, at least not initially (think Exynos 5410). OEMs like Samsung and NVIDIA are moving away from it because it adds MORE _collective_ complexity at the arrangement/design level of the hardware, and even more so on the software level. It's arguable that it's cheaper and more efficient to get rid of all that complexity altogether, spend on designing a core architecture every 2-3 years, reuse that architecture, and put the saved space/complexity toward improving/adding more features. SRAM is one way to do that, and I believe Apple did the right thing in that particular regard.
  • name99 - Saturday, April 4, 2015 - link

    The fundamental reasonS Apple's core is superior include
    (definite)
    - willingness to spend a LOT more transistors than the ARM cores
    - a more aggressive micro-architecture (6 wide, very large re-order and memory buffers)
    (probably)
    - more expensive (in transistors) branch prediction and memory prefetch
    (perhaps)
    - more sophisticated power control (eg automatically tracking whether code is memory limited, and if so shutting down one of the two compute clusters)
  • danbob999 - Sunday, April 5, 2015 - link

    There is so much BS in this comment. As if A57 designs weren't optimized.
  • Jumangi - Monday, April 6, 2015 - link

    Not to anywhere near the degree that an Apple A-series SoC is.
  • fteoath64 - Tuesday, April 7, 2015 - link

    The A57 design is optimized to some degree and probably leaves a lot out for partners to handle their own optimizations. ARM wants to save transistors, so it does optimizations that save transistors. Sammy looks to be ripping out modules and replacing them with others; I doubt they did core CPU pipeline optimizations. Stuff like Denver or the A7/A8 takes too much time, which they do not have. So counting on the 14nm process works well enough, as Intel can attest.
  • TrojMacReady - Tuesday, April 14, 2015 - link

    Not sure how it's a lead when the SoC is considerably larger (just 78mm² for the 7420), no less power hungry, produces more heat, and is slower in most benchmarks and practical tasks and use (except 3D games at native resolution), despite having to push up to almost 4 times fewer pixels than, say, the S6.
  • name99 - Saturday, April 4, 2015 - link

    "Android can make use of all the cores it can get"

    Please justify this statement. The issue is not "can Android's version of Linux technically support 8 cores", that is obviously true. It's likely just as true that iOS's version of Darwin can technically support 8 cores [since OSX's version of Darwin clearly can].
    The point is: what evidence is there that some significant portion of Android software (the runtime, the frameworks, significant apps) makes SIGNIFICANT use of 8 (or even 4) cores?
    The usual benchmarks don't show a massive boost from all these cores. Responsiveness is not significantly improved over iOS. If the best you can say is "games [in some vague generic sense] make significant use of them", well, perhaps (I personally don't give a damn about games); but I don't see evidence of some sort of hard-core Android gamer community that matters significantly to anyone from phone designers to phone revenue trackers.
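
    (For anyone who actually wants to gather that evidence, here's a rough sketch — assuming nothing more than adb and a readable /proc/stat on the device — that counts how many cores are doing meaningful work over a one-second window during whatever usage you care about.)

        import subprocess, time

        def per_core_times():
            """Return {core: (busy_jiffies, total_jiffies)} parsed from /proc/stat."""
            out = subprocess.run(["adb", "shell", "cat", "/proc/stat"],
                                 capture_output=True, text=True).stdout
            times = {}
            for line in out.splitlines():
                if line.startswith("cpu") and len(line) > 3 and line[3].isdigit():
                    fields = line.split()          # e.g. cpu0 user nice system idle iowait ...
                    vals = [int(v) for v in fields[1:]]
                    idle = vals[3] + vals[4]       # idle + iowait
                    times[fields[0]] = (sum(vals) - idle, sum(vals))
            return times

        before = per_core_times()
        time.sleep(1)
        after = per_core_times()
        busy = sum(1 for c in after
                   if c in before
                   and after[c][1] > before[c][1]
                   and (after[c][0] - before[c][0]) / (after[c][1] - before[c][1]) > 0.20)
        print(busy, "of", len(after), "cores were more than 20% busy in the last second")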

    As for performance numbers, the single core GeekBench numbers for A8X are basically equal to the lower-end Broadwell-Y (1.1GHz, Turbo to 2.6) in the MacBook, as are the multi-core numbers (2 cores+HT for Broadwell, 3 cores for Apple). The higher end Macbook is maybe 15% faster (I haven't seen benchmarks for it yet), and I expect the A9X will exceed that 15% gap.

    Point is, when subject to the same design constraints (same power budget, same thermals) the A8X is every bit the equal of the Intel chip and probably superior, given the more constrained environment of a tablet vs ultrabook.

    As for your i3, i5 vs i7 rant. I have no idea what you're even saying there, so I'll just quietly ignore it.
  • kyuu - Saturday, April 4, 2015 - link

    Ah yes, let's cherry-pick GeekBench numbers (why is that somehow considered the ultimate benchmark by some people?) and ignore that the A8X is definitely not equal to Broadwell in single-threaded performance (or multithreaded performance with an equal number of cores) according to just about every metric in existence.
  • Jumangi - Monday, April 6, 2015 - link

    Comparing an ARM chip to an Intel Core CPU is freaking stupid.
  • lilmoe - Saturday, April 4, 2015 - link

    Why is it that an objective (or even subjective) opinion always has to turn into a flame war? Can't you just debate someone's claims without coming off as a fanboy, or as hurt because your favorite company didn't get absolute and affirmative praise? SMH.. I did give credit to their design decisions regarding their platform......

    Let's make this clear:
    - Application processors aren't only about single-threaded performance. You can't measure a platform's worth based on specific, one-sided metrics.
    - I'm not a huge fan of Core M, but it's definitely not comparable to other "mobile" processors. Atom is, especially the newer ones.
    - Geekbench, among other benchmarks, does NOT run equally, with the same workload, on each and every platform. Stop making one or two benchmarks the sole basis of your opinion, especially when they're cross-platform. That's similar (albeit not the same) to using "browser" benchmarks to judge cross-platform CPU performance (which is ridiculous BTW). At best, Geekbench is good for gauging the difference between generational upgrades.

    --------

    "The point is: what evidence is there that some significant portion of Android software (the runtime, the frameworks, significant apps) make SIGNIFICANT use of 8 (or even 4) cores"

    Dude, you're being paranoid. I'm not saying that iOS isn't capable of handling more cores at the OS/kernel level. Unlike iOS, Android _allows_ ANY task to run in the background without definitive constraints. This means that having more cores definitely benefits the platform. This isn't about particular apps taking advantage of multiple cores either; I'm talking about spreading the workload of DIFFERENT apps/processes running SIMULTANEOUSLY across different cores. Again, by design, iOS does NOT allow apps to run freely in the background, thus my point that iOS doesn't "need" many cores to run optimally, but rather faster cores that dedicate more power to the task in the foreground, plus additional co-processors to handle tightly integrated tasks in the background. CPUs running Android could go the same route, BUT doing so would arguably not be optimal; if two Android apps are running simultaneously and utilizing 60% of CPU power (for example), it would be faster and more power efficient for 2 cores to handle that workload at 30% each. CPU power scaling isn't linear, you know... More cores isn't only about "marketing". This is more about system behavior/design decisions than it is about system capability.
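
    (To put a rough number on that last sentence, here's a purely illustrative calculation using the textbook dynamic-power relation P ≈ C·f·V²; the frequency/voltage pairs are made up for the example, not measured Exynos figures.)

        # Dynamic power scales roughly with C * f * V^2, and higher clocks need
        # higher voltage, so power grows super-linearly with frequency.
        # Hypothetical operating points, NOT real Exynos data:
        def rel_power(freq_ghz, volts):
            return freq_ghz * volts ** 2

        one_fast_core  = rel_power(1.9, 1.10)       # one core carrying the whole load
        two_slow_cores = 2 * rel_power(1.0, 0.85)   # similar total work spread over two cores
        print("one core  @ 1.9 GHz:", round(one_fast_core, 2))   # ~2.3 (relative units)
        print("two cores @ 1.0 GHz:", round(two_slow_cores, 2))  # ~1.4 (relative units)
        # The two slower cores finish comparable work for noticeably less power, which
        # is the argument for spreading independent apps/processes across cores.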

    --------

    "Point is, when subject to the same design constraints (same power budget, same thermals) the A8X is every bit the equal of the Intel chip and probably superior"

    This is highly unlikely. Read above.

    --------

    I can direct you to a Webster's definition of the word "rant", and might also elaborate more on my point about i3/5/7. But sure, let's leave it at that since I believe it was clear enough.
  • Speedfriend - Tuesday, April 7, 2015 - link

    @name99

    "As for performance numbers, the single core GeekBench numbers for A8X are basically equal to the lower-end Broadwell-Y"
    The Core M 5Y10 gets 2030 single-core versus the A8X at 1808; that's 12% higher, or more than the performance improvement Apple made between the A7 and A8.
    The Core M 5Y71 in the T300 Chi gets 2900 single-core and 5500 multi-core, which is 60% ahead on single-core and 25% ahead on multi-core.
    Given how little the improvement was between A7 and A8, Apple will have to pull off a miracle to close that gap.
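    (A quick sanity check on those ratios, using only the scores quoted above:)

        a8x_single = 1808
        print(round((2030 / a8x_single - 1) * 100), "%")   # 5Y10 single-core lead -> ~12%
        print(round((2900 / a8x_single - 1) * 100), "%")   # 5Y71 single-core lead -> ~60%
        # The quoted 25% multi-core lead for the 5Y71 (5500) would imply an A8X
        # multi-core score of roughly 5500 / 1.25 = 4400.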
  • samlim01 - Saturday, April 4, 2015 - link

    I want Samsung to get an architectural license and do a custom Exynos core.
  • Mobile-Dom - Saturday, April 4, 2015 - link

    They already have one; the problem is that custom cores take a hell of a long time to develop.
  • johnnohj - Saturday, April 4, 2015 - link

    Samsung is rumored to be working on an SoC codenamed 'Mongoose' with custom cores (max clock of 2.3GHz) + stock A53 cores in a big.LITTLE configuration. Rumor is it has a single-core Geekbench score of 2200.

    http://www.gsmarena.com/next_samsung_exynos_to_pac...
    http://www.sammobile.com/2015/03/18/samsung-expect...
  • Drumsticks - Friday, April 3, 2015 - link

    I second the "set the phone at 720p and test" notion! (I didn't even know that was possible!) It would be really nice to see what kind of battery life the device can get like that.
  • extide - Monday, April 6, 2015 - link

    It's not possible to reduce the power usage of the screen that way though... the screen still physically has the same number of pixels...
  • Andrei Frumusanu - Saturday, April 4, 2015 - link

    I'll be taking a look at those things, but not for the upcoming review. I still don't have the devices to test things on.
  • lilmoe - Saturday, April 4, 2015 - link

    Thanks.
  • johnnohj - Sunday, April 5, 2015 - link

    Since the S6 is using a new touch controller, can we see a comparison of the display touch latency with other phones?
    And I wonder what the storage performance is like with full disk encryption.
  • Andrei Frumusanu - Sunday, April 5, 2015 - link

    The Note 4 used the same touch controller, don't expect any difference. And FDE is not enabled on Samsung devices.
  • extide - Monday, April 6, 2015 - link

    FWIW, I have a Note4 and my work recently started requiring FDE, and I haven't really noticed a difference in performance between now and before when it was not encrypted. This is of course on Android 4.4.4 and a stock/not rooted Note 4.
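
    (If anyone wants to spot-check this on their own device, here's a crude sketch. It assumes adb, a writable /sdcard, and a dd binary in the device's toolbox/toybox; note that the page cache makes the absolute number optimistic, so treat it only as a before/after FDE comparison.)

        import subprocess, time

        def adb_shell(cmd):
            return subprocess.run(["adb", "shell", cmd],
                                  capture_output=True, text=True).stdout

        # Crude sequential-write test: time a 256MB write to internal storage.
        # Run it once before enabling encryption and once after, then compare MB/s.
        test_file = "/sdcard/fde_speed_test.bin"
        start = time.time()
        adb_shell("dd if=/dev/zero of=%s bs=1048576 count=256" % test_file)
        elapsed = time.time() - start
        adb_shell("rm %s" % test_file)
        print("sequential write: %.1f MB/s (cached; relative comparison only)" % (256 / elapsed))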
  • johnnohj - Wednesday, April 15, 2015 - link

    http://forum.xda-developers.com/galaxy-s6/general/...
    According to the benchmarks in this post on XDA, there seems to be no noticeable impact on S6 performance after encryption.
    I hope Samsung enables it by default on future devices.
  • coolhardware - Saturday, April 4, 2015 - link

    If you want a top-quality camera, I do not think you will like the M9.

    The only thing lacking on the S6 is speaker quality IMHO, which is a real bummer coming from the Nexus 6 (I love its speakers).

    For me the camera on the S6 really takes it to the next level. I ordered the 128GB on Amazon and am selling my Nexus 6 on eBay (I will really miss the speakers though).

    With no sales tax in my state and the 3% cashback, it brings the 32GB model to an especially nice price:
    http://amzn.to/1P952Ui
    Especially if you like to stream movies/media rather than storing them on device.

    Lastly, Amazon is doing a $35 app credit with some of the phones which is rather interesting.
  • FlushedBubblyJock - Friday, April 24, 2015 - link

    A friend of mine recently acquired the Moto G (Cricket) and its speakers are tremendously loud, surprisingly loud.

    I just don't get how, on a PHONE, they have lousy speakers when they KNOW hands-free is nearly required everywhere now, and movies, videos, "driving maps", and so many other things inherently require the speaker(s).

    WHAT GIVES? HOW ARE THEY SO THICK!?
  • hung2900 - Friday, April 3, 2015 - link

    "so it’s likely that we’re looking at a great deal of optimization in layout and possibly some IP blocks removed in order to reduce die size"

    I think it's hard not to have a major improvement from this process node, because the GPU also increased from 6 to 8 cores, which occupies a relatively bigger area.
  • Gondalf - Saturday, April 4, 2015 - link

    Or more likely the not-so-good 20nm process was underutilized to achieve higher yields. It is a common practice to stay "large" in non-critical layers when there are yield issues.
    Anyway, around 80mm² is pretty much in line with 14/16nm process density capability. Intel apparently is denser with Airmont and its oversized GPU (many say around 50mm², but it's too early to confirm this data).
  • Gondalf - Saturday, April 4, 2015 - link

    No, from Intel's datasheet Cherry Trail apparently is 71mm², even though the GPU is four times larger than in Bay Trail.
    So both processes are nicely dense in spite of more IP blocks on die.
  • hung2900 - Saturday, April 4, 2015 - link

    You don't get the point. Samsung's 14nm FinFET node is known to have the smaller transistors but no interconnect shrink (14nm FEOL and 20nm BEOL), and the GPU is still a Mali-T760, just with more cores. Theoretically the Exynos 7420 should be bigger than the Exynos 5433, but actually it's much smaller, meaning there may be an actual die shrink here.

    Bay Trail is made on Intel's 22nm FinFET (22nm BEOL and 26nm FEOL), while Cherry Trail is made on Intel's 14nm FinFET (14nm for both BEOL and FEOL). The die shrink is significant here.
  • Gondalf - Saturday, April 4, 2015 - link

    I do get the point; still, IMO the 5433 was manufactured "large" on 20nm to increase yields. You know that every process has at least three flavours: maximum density, medium density, and fast-clock density. Likely Samsung chose medium density for better yields on 20nm, but this time forced maximum density on this 14nm SKU; after all the 7420 comes later than the 5433, and surely Samsung has solved some issues in the metallization stack.
    Anyway, officially Samsung admits to some interesting density advantages of 14nm over 20nm (15% in the best case), but definitely not a full node shrink.
  • Peroxyde - Saturday, April 4, 2015 - link

    The LCD display still has the home screen in color on it while it is already completely disassembled. How is it possible?
  • gfieldew - Saturday, April 4, 2015 - link

    LCD display? It's an OLED display, unless you accidentally commented in the wrong post or something.
  • BillBear - Saturday, April 4, 2015 - link

    Be sure to devote plenty of ink to the S6 #Bendgate problem.

    https://www.youtube.com/watch?v=3Y7tPczbOec
  • Solandri - Sunday, April 5, 2015 - link

    Even in that video, you can see the iPhone 6 *remains* bent after the load is removed. The S6 reverts back to its original shape.

    Of course its screen cracked, but they say the screen continued to function and it was only the glass that failed, which is why flexible OLED screens are eventually going to win out over LCD on phones. The whole bending issue is one of the reasons why plastic is a better material for phones than metal, despite the "premium feel" misconception held by millions of people who don't know the first thing about materials science. Better to have a phone which bends and reverts back to its original shape than a phone which bends and stays bent.
  • FlushedBubblyJock - Friday, April 24, 2015 - link

    Amen Solandri - not to mention that metal is very harsh on the hands compared to plastic.

    But the elitist ego mongers, yeay apple metal..... my GOD how did it ever happen....yeay BROKEN GLASS ON THE BACK !!!

    MY GOD THEY ARE RETARDS.
  • tipoo - Sunday, April 5, 2015 - link

    That doesn't even test what the original iPhone bend problem was about. It's testing them both at the middle of the phone. The problems with that iPhone were about the area around the volume keys.
  • tipoo - Saturday, April 4, 2015 - link

    Wow, 78mm² for what is pretty much the new performance champion. I wonder what others who get more spendy with 14nm die sizes can accomplish, then (Apple perhaps?).
  • PC Perv - Saturday, April 4, 2015 - link

    Didn't you previously state how "important" it is to understand that Samsung's 14nm is not "true" 14nm like Intel's? (or TSMC's 16nm for that matter) Because Samsung's 14nm uses the same interconnect as the one used for its 20nm process?

    It is not the same any more?
  • tipoo - Sunday, April 5, 2015 - link

    That's right. Samsung's 14nm shrinks the front-end-of-line (the transistors) but keeps a 20nm-class back-end-of-line (the interconnect), whereas Intel's 14nm uses 14nm throughout, including the interconnect.
  • PC Perv - Tuesday, April 7, 2015 - link

    113mm² to 78mm² can hardly be explained as "20nm with FinFET." Especially not with 2 more GPU clusters.
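
    (A quick back-of-the-envelope on those numbers — the die sizes are the ones quoted in this thread, the ~15% density figure is Samsung's own claim mentioned above, and the scaling factors are idealized assumptions:)

        # Back-of-the-envelope on the die sizes quoted above (idealized scaling factors):
        observed         = 78 / 113          # 7420 vs 5433 area ratio            -> ~0.69x
        full_node_shrink = (14 / 20) ** 2    # if every dimension truly scaled    -> ~0.49x
        beol_limited     = 1 / 1.15          # Samsung's quoted ~15% density gain -> ~0.87x
        print(round(observed, 2), round(full_node_shrink, 2), round(beol_limited, 2))
        # 0.69x is well below the ~0.87x a reused 20nm interconnect alone would explain,
        # even before counting the two extra GPU cores in the 7420.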
  • Morawka - Tuesday, April 7, 2015 - link

    Chipworks confirmed that Samsung's 14nm process uses 3D transistors (FinFETs).

    I really hope we get some video cards out of this now that we have a foundry for hire that can do high-performance chips on a more advanced lithography process.
