Navi vs. Turing: An Architecture Comparison

You've followed the rumors and ignored the hype; you waited for the reviews and looked at all the benchmarks. Finally, you slapped down your dollars and walked away with one of the latest graphics cards from AMD or Nvidia. Inside it lies a large graphics processor, packed with billions of transistors, all running at clock speeds unthinkable a decade ago.

You're really happy with your purchase and games have never looked or played better. But you might just be wondering what exactly is powering your brand new Radeon RX 5700 and how different it is from the chip in a GeForce RTX.

Welcome to our architectural and feature comparison of the newest GPUs from AMD and Nvidia: Navi vs Turing.

Anatomy of a Modern GPU

Before we begin our breakdown of the overall chip structures and systems, let's take a look at the basic format that all modern GPUs follow. For the most part, these processors are just floating point (FP) calculators; in other words, they do math operations on decimal/fractional values. So at the very least, a GPU needs one logic unit dedicated to these tasks, usually called an FP ALU (floating point arithmetic logic unit) or FPU for short. Not all of the calculations GPUs do are on FP data values, so there will also be an ALU for whole number (integer) math operations, or the same unit may simply handle both data types.

Now, these logic units are going to need something to organize them, by decoding and issuing instructions to keep them busy, and this will be in the form of at least one dedicated group of logic units. Unlike the ALUs, they won't be programmable by the end user; instead, the hardware vendor will ensure this process is managed entirely by the GPU and its drivers.

To store these instructions and the data that needs to be processed, there needs to be some kind of memory structure, too. At its simplest level, it will be in two forms: cache and a spot of local memory. The former will be embedded into the GPU itself and will be SRAM. This kind of memory is fast but takes up a relatively large amount of the processor's layout. The local memory will be DRAM, which is quite a bit slower than SRAM and won't normally be put into the GPU itself. Most of the graphics cards we see today have local memory in the form of GDDR DRAM modules.

Finally, 3D graphics rendering involves a number of additional set tasks, such as forming triangles from vertices, rasterizing a 3D frame, sampling and blending textures, and so on. Like the instruction and control units, these are fixed function in nature. What they do and how they operate is completely transparent to users programming and using the GPU.

Let's put this together and make a GPU:

The orange block is the unit that handles textures using what are called texture mapping units (TMUs) – TA is the texture addressing unit (it creates the memory locations for the cache and local memory to use) and TF is the texture fetch unit that collects texture values from memory and blends them together. These days, TMUs are pretty much the same across all vendors, in that they can address, sample and blend multiple texture values per GPU clock cycle.

The block beneath it writes the color values for the pixels in the frame, as well as sampling them back (PO) and blending them (PB); this block also performs operations that are used when anti-aliasing is employed. The name for this block is render output unit or render backend (ROP/RB for short). Like the TMU, they're quite standardized now, with each one comfortably handling several pixels per clock cycle.

Our basic GPU would be awful, though, even by standards from 13 years ago. Why?

There's only one FPU, TMU, and ROP. Graphics processors in 2006, such as Nvidia's GeForce 8800 GTX, had 128, 32, and 24 of them, respectively. So let's start to do something about that...

Like any good processor manufacturer, we've updated our GPU by adding in some more units. This means the chip will be able to process more instructions simultaneously. To help with this, we've also added in a bit more cache, but this time, right next to the logic units. The closer cache is to a calculator structure, the quicker it can get started on the operations given to it.

The problem with our new design is that there's still only one control unit handling our extra ALUs. It would be better if we had more blocks of units, all managed by their own separate controller, as this would mean we could have vastly different operations taking place at the same time.

Now this is more like it! Separate ALU blocks, packed with their own TMUs and ROPs, and supported by dedicated slices of tasty, fast cache. There's still only one of everything else, but the basic structure isn't a million miles away from the graphics processor we see in PCs and consoles today.

Now that we have described the basic layout of a graphics chip, let's start our Navi vs. Turing comparison with some images of the actual chips, albeit somewhat magnified and processed to highlight the various structures.

On the left is AMD's newest processor. The overall chip design is called Navi (some folks call it Navi 10) and the graphics architecture is called RDNA. Next to it, on the right, is Nvidia's full-size TU102 processor, sporting the latest Turing architecture. It's important to note that these images are not to scale: the Navi die has an area of 251 mm², whereas the TU102 is 752 mm². The Nvidia processor is big, but it's roughly three times the size of the AMD offering, not the eight times the side-by-side images might suggest!

They're both packing a gargantuan number of transistors (10.3 billion vs. 18.6 billion), but the TU102 averages ~25 million transistors per square mm compared to Navi's ~41 million per square mm.

This is because while both chips are fabricated by TSMC, they're manufactured on different process nodes: Nvidia's Turing is on the mature 12 nm manufacturing line, whereas AMD's Navi gets manufactured on the newer 7 nm node.
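
As a quick sanity check, here's a minimal Python sketch that reproduces those density figures purely from the transistor counts and die areas quoted above (the inputs are the article's numbers, not independent measurements):

```python
# Transistor density check using the figures quoted above.
chips = {
    "Navi 10": {"transistors": 10.3e9, "area_mm2": 251},
    "TU102":   {"transistors": 18.6e9, "area_mm2": 752},
}

for name, spec in chips.items():
    density_millions = spec["transistors"] / spec["area_mm2"] / 1e6
    print(f"{name}: ~{density_millions:.1f} million transistors per mm²")

# Navi 10: ~41.0 million transistors per mm²
# TU102:  ~24.7 million transistors per mm²
```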

Just looking at images of the dies doesn't tell us much about the architectures, so let's take a look at the GPU block diagrams produced by both companies.

The diagrams aren't meant to be a 100% realistic representation of the actual layouts but if you rotate them through 90 degrees, the various blocks and central strip that are apparent in both can be identified. To start with, we can see that the two GPUs have an overall structure like ours (albeit with more of everything!).

Both designs follow a tiered approach to how everything is organized and grouped. Taking Navi to begin with, the GPU is built from 2 blocks that AMD calls Shader Engines (SEs), which are each split into another 2 blocks called Asynchronous Compute Engines (ACEs). Each one of these comprises 5 blocks, titled Workgroup Processors (WGPs), which in turn consist of 2 Compute Units (CUs).

For the Turing design, the names and numbers are different, but the hierarchy is very similar: 6 Graphics Processing Clusters (GPCs), each with 6 Texture Processing Clusters (TPCs), with each of those built up of 2 Streaming Multiprocessor (SM) blocks.

If you picture a graphics processor as being a large factory, where different sections manufacture different products, using the same raw materials, then this organization starts to make sense. The factory's CEO sends out all of the operational details to the business, where it then gets split into various tasks and workloads. By having multiple, independent sections to the factory, the efficiency of the workforce is improved. For GPUs, it's no different and the magic keyword here is scheduling.

Front and Center, Soldier – Scheduling and Dispatch

When we took a look at how 3D game rendering works, we saw that a graphics processor is really nothing more than a super fast calculator, performing a range of math operations on millions of pieces of data. Navi and Turing are classed as Single Instruction Multiple Data (SIMD) processors, although a better description would be Single Instruction Multiple Threads (SIMT).

A modern 3D game generates hundreds of threads, sometimes thousands, as the number of vertices and pixels to be processed is enormous. To ensure that they all get done in just a few microseconds, it's important to keep as many logic units busy as possible, without the whole thing stalling because the necessary data isn't in the right place or there's not enough resource space to work in.

Navi and Turing work in a similar manner whereby a central unit takes in all the threads and then starts to schedule and issue them. In the AMD chip, this role is carried out by the Graphics Command Processor; in Nvidia's, it's the GigaThread Engine. Threads are organized in such a way that those with the same instructions are grouped together, specifically into a collection of 32 threads.

AMD calls this collection a wave, whereas Nvidia calls it a warp. For Navi, one Compute Unit can handle 2 waves (or one 64-thread wave, but this takes twice as long), and in Turing, one Streaming Multiprocessor works through 4 warps. In both designs, the waves/warps are independent, i.e. they don't need the others to finish before they can start.
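
To make the grouping concrete, here's a small Python sketch of how a pile of work items gets carved into 32-thread waves/warps; the function name and the one-thread-per-pixel example are ours for illustration, not AMD or Nvidia terminology:

```python
import math

WAVE_SIZE = 32  # Navi's wave32 and Turing's warp both group threads in 32s

def waves_needed(work_items: int, wave_size: int = WAVE_SIZE) -> int:
    """How many 32-thread waves/warps cover every work item
    (the final wave may be only partially filled)."""
    return math.ceil(work_items / wave_size)

# e.g. one pixel shader thread per pixel of a 1920 x 1080 frame:
print(waves_needed(1920 * 1080))   # 64800 waves/warps
```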

So far then, there's not a whole lot of difference between Navi and Turing – they're both designed to handle a vast number of threads, for rendering and compute workloads. We need to look at what processes those threads to see where the two GPU giants separate in design.

A Difference of Execution - RDNA vs CUDA

AMD and Nvidia take a markedly different approach to their unified shader units, even though a lot of the terminology used seems to be the same. Nvidia's execution units (CUDA cores) are scalar in nature – that means one unit carries out one math operation on one data component; by contrast, AMD's units (Stream Processors) work on vectors – one operation on multiple data components. For scalar operations, AMD's design has a single dedicated unit.

Before we take a closer look at the execution units, let's examine AMD's changes to theirs. For 7 years, Radeon graphics cards have followed an architecture called Graphics Core Next (GCN). Each new chip has revised various aspects of the design, but they've all fundamentally been the same.

AMD has provided a (very) brief history of their GPU architecture:

GCN was an evolution of TeraScale, a design that allowed for large waves to be processed at the same time. The main issue with TeraScale was that it just wasn't very friendly towards programmers and needed very specific routines to get the best out of it. GCN fixed this and provided a far more accessible platform.

The CUs in Navi have been significantly revised from GCN as part of AMD's improvement process. Each CU contains two sets of:

  • 32 SPs (IEEE 754 FP32 and INT32 vector ALUs)
  • 1 SFU
  • 1 INT32 scalar ALU
  • 1 scheduling and dispatch unit

Along with these, every CU contains 4 texture units. There are other units inside, to handle the data read/writes from cache, but they're not shown in the image below:

Compared to GCN, the setup of an RDNA CU might seem to be not very different, but it's how everything has been organized and arranged that's important here. To start with, each set of 32 SPs has its own dedicated instruction unit, whereas GCN only had one scheduler for 4 sets of 16 SPs.

This is an important change as it means one 32 thread wave can be issued per clock cycle to each set of SPs. The RDNA architecture also allows the vector units to handle waves of 16 threads at twice the rate, and waves of 64 threads at half the rate, so code written for all of the previous Radeon graphics cards is still supported.
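
Multiplying out the hierarchy described earlier (2 Shader Engines, 2 ACEs per SE, 5 WGPs per ACE, 2 CUs per WGP, and two sets of 32 SPs per CU) gives the full chip's Stream Processor count; the rough Python tally below also shows the issue cost of the two wave widths just mentioned:

```python
# Navi 10 shader core tally, following the hierarchy described in the article.
shader_engines = 2
aces_per_se    = 2
wgps_per_ace   = 5
cus_per_wgp    = 2
simd32_per_cu  = 2   # two sets of 32 SPs per Compute Unit
sps_per_simd32 = 32

cus = shader_engines * aces_per_se * wgps_per_ace * cus_per_wgp
sps = cus * simd32_per_cu * sps_per_simd32
print(cus, sps)        # 40 CUs, 2560 Stream Processors

# Issue cost per SIMD32 unit: a wave32 goes out in one clock,
# a wave64 needs two passes.
def cycles_per_wave(wave_width: int) -> int:
    return max(1, wave_width // sps_per_simd32)

print(cycles_per_wave(32), cycles_per_wave(64))   # 1 2
```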

For scalar operations, there are now twice as many units to handle them; the only reduction in component count is in the SFUs – these are special function units that perform very specific math operations, e.g. trigonometric (sine, tangent), reciprocal (1 divided by a number) and square root calculations. There are fewer of them in RDNA compared to GCN, but they can now operate on data sets twice the size as before.

For game developers, these changes are going to be very popular. Older Radeon graphics cards had lots of potential performance, but tapping into that was notoriously difficult. Now, AMD has taken a large step forward in reducing the latency in processing instructions and also retained features to allow for backwards compatibility for all the programs designed for the GCN architecture.

But what about for the professional graphics or compute market? Are these changes beneficial to them, too?

The short answer would be, yes (probably). While the current version of the Navi chip, as found in the likes of the Radeon RX 5700 XT, has fewer Stream Processors than the previous Vega design, we found it to outperform a previous-gen Radeon RX Vega 56 quite easily:

Some of this performance gain will come from the RX 5700 XT's higher clock rate than the RX Vega 56 (so it can write more pixels per second into the local memory), but it's down on peak integer and floating point performance by as much as 15%; and yet, we saw the Navi chip outperform the Vega by as much as 18%.

Professional rendering programs and scientists running complex algorithms aren't exactly going to be blasting through a few rounds of Battlefield V in their jobs (well, maybe...) but if the scalar, vector, and matrix operations done in a game engine are being processed faster, then this should translate into the compute market. Right now, we don't know what AMD's plans are regarding the professional market – they could well continue with the Vega architecture and keep refining the design, to aid manufacturing, but given the improvements in Navi, it makes sense for the company to move everything onto the new architecture.

Nvidia's GPU design has undergone a similar path of evolution since 2006 when they launched the GeForce 8 series, albeit with fewer radical changes than AMD. This GPU sported the Tesla architecture, one of the first to use a unified shader approach to the execution architecture. Below we can see the changes to the SM blocks from the successor to Tesla (Fermi), all the way through to Turing's predecessor (Volta):

As mentioned earlier in this article, CUDA cores are scalar. They can carry out one float and one integer instruction per clock cycle on one data component (note, though, that the instruction itself might take multiple clock cycles to be processed), but the scheduling units organize them into groups in such a way that, to a programmer, they can perform vector operations. The most significant change over the years, other than there simply being more units, involves how they are arranged and sectioned.

In the Kepler design, the full chip had 5 GPCs, with each one housing three SM blocks; by the time Pascal appeared, the GPCs were split into discrete sections (TPCs) with two SMs per TPC. Just like with the Navi design, this fragmentation is important, as it allows the overall GPU to be as fully utilized as possible; multiple groups of independent instructions can be processed in parallel, raising the shading and compute performance of the processor.

Let's take a look at the Turing equivalent to the RDNA Compute Unit:

One SM contains 4 processing blocks, with each containing:

  • 1 instruction scheduling and dispatch unit
  • 16 IEEE 754 FP32 scalar ALUs
  • 16 INT32 scalar ALUs
  • 2 Tensor cores
  • 4 SFUs
  • 4 Load/Store units (which handle cache read/writes)

There are also 2 FP64 units per SM, but Nvidia doesn't show them in their block diagrams anymore, and every SM houses 4 texture units (containing texturing addressing and texturing filtering systems) and 1 RT (Ray Tracing) core.
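
Multiplying those counts out, along with the 6 GPC x 6 TPC x 2 SM layout from earlier, gives the totals for the full TU102 die; note that shipping cards such as the GeForce RTX 2080 Ti have a handful of SMs disabled, so their numbers are slightly lower:

```python
# Turing SM and full TU102 tallies, from the counts described above.
blocks_per_sm    = 4
fp32_per_block   = 16
tensor_per_block = 2

cuda_cores_per_sm   = blocks_per_sm * fp32_per_block    # 64
tensor_cores_per_sm = blocks_per_sm * tensor_per_block  # 8

sms_in_full_tu102 = 6 * 6 * 2                           # 72 SMs
print(cuda_cores_per_sm * sms_in_full_tu102)            # 4608 CUDA cores
print(tensor_cores_per_sm * sms_in_full_tu102)          # 576 Tensor cores
```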

The FP32 and INT32 ALUs can work concurrently and in parallel. This is an important feature because even though 3D rendering engines require mostly floating point calculations, there is still a reasonable number of simple integer operations (e.g. data address calculations) that need to be done.

The Tensor Cores are specialized ALUs that handle matrix operations. Matrices are rectangular data arrays, and Tensor Cores work on 4 x 4 matrices. They are designed to handle FP16, INT8 or INT4 data components in such a way that, in one clock cycle, up to 64 FMA (fused multiply-then-add) float operations take place. This type of calculation is commonly used in so-called neural networks and inferencing – not exactly very common in 3D games, but heavily used by the likes of Facebook for their social media analyzing algorithms or in cars that have self-driving systems. Navi is also able to do matrix calculations but requires a large number of SPs to do so; in the Turing system, matrix operations can be done while the CUDA cores are doing other math.
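
Here's a tiny NumPy sketch of the operation a Tensor Core performs on a 4 x 4 tile – NumPy is only standing in for the math; the hardware does the whole thing in a single clock cycle on FP16 (or INT8/INT4) inputs:

```python
import numpy as np

# D = A x B + C on 4 x 4 FP16 tiles - the matrix multiply-accumulate
# that a single Tensor Core performs per clock cycle.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float16)

D = A @ B + C

# Each of the 16 output elements needs 4 multiply-adds,
# so one tile is 4 x 4 x 4 = 64 FMA operations - the figure quoted above.
print(D.shape, 4 * 4 * 4)   # (4, 4) 64
```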

The RT Core is another special unit, unique to the Turing architecture, that performs very specific math algorithms used for Nvidia's ray tracing system. A full analysis of this is beyond the scope of this article, but the RT Core is essentially two systems that work separately from the rest of the SM, so the SM can still work on vertex or pixel shaders while the RT Core is busy doing calculations for ray tracing.

On a fundamental level, Navi and Turing have execution units that offer a reasonably similar feature set (a necessity born out of needing to comply with the requirements of Direct3D, OpenGL, etc.), but they take a very different approach to how these features are processed. Which design is better comes down entirely to how they get used: a program that generates lots of threads performing FP32 vector calculations and little else would seem to favor Navi, whereas a program with a variety of integer, float, scalar and vector calculations would favor the flexibility of Turing, and so on.

The Memory Hierarchy

Modern GPUs are streaming processors, that is to say, they are designed to perform a set of operations on every element in a stream of data. This makes them less flexible than a general purpose CPU, and it also requires the memory hierarchy of the chip to be optimized for getting data and instructions to the ALUs as quickly as possible and in as many streams as possible. This means that GPUs will have less cache than a CPU, as more of the chip needs to be dedicated to cache access, rather than to the amount of cache itself.

Both AMD and Nvidia resort to using multiple levels of cache within the chips, so let's have a peek at what Navi packs first.

Starting at the lowest level in the hierarchy, the two blocks of Stream Processors utilize a total of 256 kiB of vector general purpose registers (generally called a register file), which is the same amount as in Vega, although there it was spread across 4 SP blocks; running out of registers while trying to process a large number of threads really hurts performance, so this is definitely a "good thing." AMD has greatly increased the scalar register file, too. Where it was previously just 4 kiB, it's now 32 kiB per scalar unit.

Two Compute Units then share a 32 kiB instruction L0 cache and a 16 kiB scalar data cache, but each CU gets its own 32 kiB vector L0 cache; connecting all of this memory to the ALUs is a 128 kiB Local Data Share.

In Navi, two Compute Units form a Workgroup Processor, and five of those form an Asynchronous Compute Engine (ACE). Each ACE has access to its own 128 kiB of L1 cache, and the whole GPU is further supported by 4 MiB of L2 cache, which is interconnected to the L1 caches and other sections of the processor.

This is almost certainly a form of AMD's proprietary Infinity Fabric interconnect architecture as the system is definitely employed to handle the 16 GDDR6 memory controllers. To maximize memory bandwidth, Navi also employs lossless color compression between L1, L2, and the local GDDR6 memory.

Again, all of this is welcome, especially when compared to previous AMD chips which didn't have enough low level cache for the number of shader units they contained. In brief, more cache equals more internal bandwidth, fewer stalled instructions (because they're having to fetch data from memory further away), and so on. And that simply equals better performance.

Moving on to Turing's hierarchy, it has to be said that Nvidia is on the shy side when it comes to providing in-depth information in this area. Earlier in this article, we saw that each SM is split into 4 processing blocks – each one of those has a 64 kiB register file, which is smaller than that found in Navi, but don't forget that Turing's ALUs are scalar, not vector, units.

Next up is 96 kiB of shared memory, for each SM, which can be employed as 64 kiB of L1 data cache and 32 kiB of texture cache or extra register space. In 'compute mode', the shared memory can be partitioned differently, such as 32 kiB shared memory and 64 kiB L1 cache, but it's always done as a 64+32 split.

The lack of detail given about the Turing memory system left us wanting more, so we turned to a GPU research team working at Citadel Enterprise Americas. Of late, they have released two papers, analyzing the finer aspects of the Volta and Turing architectures; the image above is their breakdown of the memory hierarchy in the TU104 chip (the full TU102 sports 6144 kiB of L2 cache).

The team confirmed that the L1 cache throughput is 64 bits per cycle and noted that under testing, the efficiency of Turing's L1 cache is the best of all Nvidia's GPUs. This is on par with Navi, although AMD's chip has a higher read rate to the Local Data Store but a lower rate for the instruction/constant caches.

Both GPUs use GDDR6 for the local memory – this is the most recent version of Graphics DDR SDRAM – and both use 32-bit connections to the memory modules, so a Radeon RX 5700 XT has 8 memory chips, giving a peak bandwidth of 448 GB/s and 8 GiB of space. A GeForce RTX 2080 Ti with a TU102 chip runs with 11 such modules, for 616 GB/s of bandwidth and 11 GiB of storage.
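
Those bandwidth figures fall straight out of the bus width and the per-pin data rate. The sketch below assumes 14 Gbps GDDR6 modules – the speed these cards ship with, though it isn't quoted in this article:

```python
# Peak GDDR6 bandwidth = (chips x 32-bit connection) x per-pin data rate.
# 14 Gbps is an assumed per-pin rate; it isn't stated in the article itself.
def peak_bandwidth_gb_s(chips: int, pin_rate_gbps: float = 14.0) -> float:
    bus_width_bits = chips * 32
    return bus_width_bits * pin_rate_gbps / 8   # bits -> bytes

print(peak_bandwidth_gb_s(8))    # 448.0 GB/s - Radeon RX 5700 XT (256-bit bus)
print(peak_bandwidth_gb_s(11))   # 616.0 GB/s - GeForce RTX 2080 Ti (352-bit bus)
```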

AMD's documents can seem to be confusing at times: in the first block diagram we saw of Navi, it shows four 64-bit memory controllers, whereas a later image suggests there are 16 controllers. Given that the likes of Samsung only offer 32-bit GDDR6 memory modules, it would seem that the second image just indicates how many connections there are between the Infinity Fabric system and the memory controllers. There probably are just 4 memory controllers and each one handles two modules.

So overall, there doesn't seem to be an enormous amount of difference between Navi and Turing when it comes to their caches and local memory. Navi has a little more than Turing nearer the execution side of things, with larger instruction/constant and L1 caches, but they're both packed full of the stuff, they both use color compression wherever possible, and both have lots of dedicated GPU die space to maximize memory access and bandwidth.

Triangles, Textures and Pixels

Fifteen years ago, GPU manufacturers made a big deal of how many triangles their chips could process, the number of texture elements that could be filtered each cycle, and the capability of the render output units (ROPs). These aspects are still important today but as 3D rendering technologies require far more compute performance than ever before, the focus is much more on the execution side of things.

However, the texture units and ROPs are still worth investigating, if only to note that there is no immediately discernible difference between Navi and Turing in these areas. In both architectures, the texture units can address and fetch 4 texture elements, bilinearly filter them into one element, and write it into cache, all in one clock cycle (disregarding any additional clock cycles taken for fetching the data from local memory).

The arrangement of the ROP/RBs is a little different between Navi and Turing, but not by much: the AMD chip has 4 RBs per ACE and each one can output 4 blended pixels per clock cycle; in Turing, each GPC sports two RBs, with each giving 8 pixels per clock. The ROP count of a GPU is really a measurement of this pixel output rate, so a full Navi chip gives 64 pixels per clock, and the full TU102 gives 96 (but don't forget that it's a much bigger chip).
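
Turning those per-clock figures into peak fill rates just means multiplying by the clock speed. The sketch below uses the ROP counts and boost clocks from the spec table later in this article; these are theoretical peaks, and real throughput depends on blending modes, memory bandwidth and so on:

```python
# Peak pixel fill rate = pixels written per clock x boost clock.
def fill_rate_gpixels(rops: int, clock_mhz: float) -> float:
    return rops * clock_mhz / 1000   # gigapixels per second

print(fill_rate_gpixels(64, 1905))   # ~121.9 Gpixels/s - Radeon RX 5700 XT
print(fill_rate_gpixels(64, 1770))   # ~113.3 Gpixels/s - GeForce RTX 2070 Super
```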

On the triangle side of things, there's less immediate information. What we do know is that Navi still outputs a maximum of 4 primitives per clock cycle (1 per ACE) but there's nothing yet as to whether or not AMD have resolved the issue pertaining to their Primitive Shaders. This was a much touted feature of Vega, allowing programmers to have far more control over primitives, such that it could potentially increase the primitive throughput by a factor of 4. However, the functionality was removed from drivers at some point not long after the product launch, and has remained dormant ever since.

While we're still waiting for more information about Navi, it would be unwise to speculate further. Turing also processes 1 primitive per clock per GPC (so up to 6 for the full TU102 GPU) in the Raster Engines, but it also offers something called Mesh Shaders, which offer the same kind of functionality as AMD's Primitive Shaders; they're not part of the Direct3D, OpenGL or Vulkan feature sets, but can be used via API extensions.

This would seem to give Turing the edge over Navi in terms of handling triangles and primitives, but there's not quite enough information in the public domain at this moment in time to be certain.

It's Not All About the Execution Units

There are other aspects to Navi and Turing that are worth comparing. To start with, both GPUs have highly developed display and media engines. The former handles the output to the monitor, the latter encodes and decodes video streams.

As you'd expect from a new 2019 GPU design, Navi's display engine supports very high resolutions at high refresh rates, along with HDR. Display Stream Compression (DSC) is a fast lossy compression algorithm that allows the likes of 4K+ resolutions at refresh rates above 60 Hz to be transmitted over a single DisplayPort 1.4 connection; fortunately, the image quality degradation is very small, almost to the point that you'd consider DSC virtually lossless.

Turing also supports DisplayPort with DSC connections, although the supported high resolution and refresh rate combination is marginally better than in Navi: 4K HDR is at 144 Hz – but the rest is the same.

Navi's media engine is just as modern as its display engine, offering support for Advanced Video Coding (H.264) and High Efficiency Video Coding (H.265), again at high resolutions and high bitrates.

Turing's video engine is roughly the same as Navi's but the 8K30 HDR encoding support may tip the balance in favor of Turing for some people.

There are other aspects to compare (Navi's PCI Express 4.0 interface or Turing's NV Link, for example) but they're really just very minor parts of the overall architecture, no matter how much they get dressed up and marketed. This is simply because, for the vast majority of potential users, these unique features aren't going to matter.

Comparing Like-for-Like

This article is an observation of architectural design, features and functionality, but having a direct performance comparison would be a good way to round up such an analysis. However, matching the Navi chip in a Radeon RX 5700 XT against the Turing TU102 processor in a GeForce RTX 2080 Ti, for example, would be distinctly unfair, given that the latter has almost twice the number of unified shader units as the former. However, there is a version of the Turing chip that can be used for a comparison and that's the one in the GeForce RTX 2070 Super.

                             Radeon RX 5700 XT       GeForce RTX 2070 Super
GPU | Architecture           Navi 10 | RDNA          TU104 | Turing
Process                      7 nm TSMC               12 nm TSMC
Die area (mm²)               251                     545
Transistors (billions)       10.3                    13.6
Block profile                2 SE | 4 ACE | 40 CU    5 GPC | 20 TPC | 40 SM
Unified shader cores         2560 SP                 2560 CUDA
TMUs                         160                     160
ROPs                         64                      64
Base clock                   1605 MHz                1605 MHz
Game clock                   1755 MHz                N/A
Boost clock                  1905 MHz                1770 MHz
Memory                       8 GB 256-bit GDDR6      8 GB 256-bit GDDR6
Memory bandwidth             448 GBps                448 GBps
Thermal Design Power (TDP)   225 W                   215 W

It's worth noting that the RTX 2070 Super is not a 'full' TU104 chip (one of the GPCs is disabled), so not all of those 13.6 billion transistors are active, which means the chips are roughly the same in terms of active transistor count. At face value, the two GPUs seem very similar, especially if you just consider the number of shader units, TMUs, ROPs, and the main memory systems.

In the Nvidia processor, one SM can handle 32 concurrent warps and with each warp consisting of 32 threads, a fully loaded GeForce RTX 2070 Super can work on 40,960 threads across the whole chip; for Navi, one CU can take up to 16 waves per SIMD32 ALU, with each wave being 32 threads. So the Radeon RX 5700 XT can also be packed with up to 40,960 threads. This would seem to make them exactly even here, but given how differently the CU/SMs are arranged, and Nvidia's advantage with concurrent INT and FP processing, the end result will depend heavily on the code being run.
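
Here's how those two 40,960-thread figures fall out of the numbers above – nothing more than a quick multiplication:

```python
threads_per_group = 32   # one warp (Nvidia) or wave (AMD)

# GeForce RTX 2070 Super: 40 SMs, each holding up to 32 concurrent warps.
print(40 * 32 * threads_per_group)        # 40960

# Radeon RX 5700 XT: 40 CUs, 2 SIMD32 units per CU, up to 16 waves each.
print(40 * 2 * 16 * threads_per_group)    # 40960
```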

This will have an impact on how various games perform, because one 3D engine's code will favor one structure more than the other, depending on what types of instructions are routinely sent to the GPU. This was evident when we tested the two graphics cards:

All of the games used in the test were programmed for AMD's GCN architecture, whether directly for Radeon-equipped PCs or through the GCN GPUs found in the likes of the PlayStation 4 or Xbox One. It's possible that some of the more recently released ones could have been prepped for RDNA's changes, but the differences seen in the benchmark results are more likely due to the rendering engines and the way the instructions and data are being handled.

So what does this all mean? Is one architecture really better than the other? Turing certainly offers more capability than Navi thanks to its Tensor and RT Cores, but the latter certainly competes in terms of 3D rendering performance. The differences seen in a 12 game sample just aren't conclusive enough to make any definitive judgment.

And that is good news for us.

Final Words

AMD's Navi plans were announced back in 2016, and although they didn't say very much back then, they were aiming for a 2018 launch. When that date came and went, the roadmap changed to 2019, but it was clear that Navi would be manufactured on a 7nm process node and the design would focus on improving performance.

That has certainly been the case and as we've seen in this article, AMD made architectural changes to allow it to compete alongside equivalent offerings from Nvidia. The new design benefits more than just PC users, as we know that Sony and Microsoft are going to use a variant of the chip in the forthcoming PlayStation 5 and next Xbox.

If you go back towards the start of this article and look again at the structural design of the Shader Engines, as well as the overall die size and transistor count, there is clearly scope for a 'big Navi' chip to go in a top-end graphics card; AMD have pretty much confirmed that this is part of their current plans, as well as aiming for a refinement of the architecture and fabrication process within the next two years.

But what about Nvidia, what are their plans for Turing and its successor? Surprisingly, very little has been confirmed by the company. Back in 2014, Nvidia updated their GPU roadmap to schedule the Pascal architecture for a 2016 launch (and met that target). In 2017, they announced the Tesla V100, using their Volta architecture, and it was this design that spawned Turing in 2018.

Since then, things have been rather quiet, and we've had to rely on rumors and news snippets, which are all generally saying the same thing: Nvidia's next architecture will be called Ampere, it will be fabricated by Samsung using their 7nm process node, and it's planned for 2020. Other than that, there's nothing else to go on. It's highly unlikely that the new chip will break tradition with the focus on scalar execution units, nor is it likely to drop aspects such as the Tensor Cores, as this would cause significant backwards compatibility issues.

We can make some reasoned guesses about what the next Nvidia GPU will be like, though. The company has invested a notable amount of time and money into ray tracing, and the support for it in games is only going to increase; so we can expect to see an improvement with the RT cores, either in terms of their capability or number per SM. If we assume that the rumor about using a 7 nm process node is true, then Nvidia will probably aim for a power reduction rather than outright clock speed increase, so that they can increase the number of GPCs. It's also possible that 7 nm is skipped, and Nvidia heads straight for 5 nm to gain an edge over AMD.

And it looks like AMD and Nvidia will be facing new competition in the discrete graphics card market from Intel, as we know they're planning to re-enter this sector after a 20-year hiatus. Whether this new product (currently named Xe) will be able to compete at the same level as Navi and Turing remains to be seen.

Meanwhile Intel has stayed alive in the GPU market throughout those 2 decades by making integrated graphics for their CPUs. Intel's latest GPU, the Gen 11, is more like AMD's architecture than Nvidia's as it uses vector ALUs that can process FP32 and INT32 data, but we don't know if the new graphics cards will be a direct evolution of this design.

What is certain is that the next few years are going to be very interesting, as long as the three giants of silicon structures continue to battle for our wallets. New GPU designs and architectures are going to push transistor counts, cache sizes, and shader capabilities; Navi and RDNA are the newest of them all, and have shown that every step forward, however small, can make a huge difference.

Shopping Shortcuts:
  • GeForce RTX 2070 Super on Amazon
  • GeForce RTX 2080 Super on Amazon
  • GeForce RTX 2080 Ti on Amazon
  • Radeon RX 5700 XT on Amazon
  • Radeon RX 5700 on Amazon
  • GeForce RTX 2060 Super on Amazon
  • GeForce GTX 1660 Super on Amazon

This article was originally published on August 7, 2019. We've slightly revised it and bumped it as part of our #ThrowbackThursday initiative.
