The source material for rendering is a set of triangles of various sizes that make up all the objects of the virtual world: landscape, game characters, monsters, weapons, and so on. By themselves, however, models built from triangles look like wireframes, so textures - colored two-dimensional "wallpaper" - are applied to them. Both the textures and the models are placed in the graphics card's memory, and then, as each frame of game action is created, a rendering cycle consisting of several stages is performed.

1. The game program sends the GPU a description of the game scene: which objects are present, their color, their position relative to the viewpoint, the lighting and the visibility. Additional data is also transmitted that characterizes the scene and lets the video card increase the realism of the resulting image by adding fog, blur, glare and so on.

2. The GPU arranges the 3D models in the frame, determines which of their triangles are visible, and culls those hidden behind other objects.

The light sources are then placed and their effect on the color of the illuminated objects is calculated. This stage of rendering is called "transformation and lighting" (T&L - Transformation & Lighting).

3. The visible triangles are textured using various filtering technologies. Bilinear filtering blends the texels nearest to the sampled point within a single resolution (mipmap level) of the texture, so on surfaces receding from the viewer it produces well-defined boundaries between sharp and blurry bands. Trilinear filtering additionally blends between two resolutions of the same texture, which gives smoother transitions.

With both technologies, however, only textures on surfaces facing the viewer head-on look really sharp; viewed at an angle, they become very blurred. Anisotropic filtering is used to prevent this.

This filtering mode is set in the video adapter driver settings or directly in the computer game. You can also change the strength of anisotropic filtering: 2x, 4x, 8x or 16x - the higher the factor, the sharper textures on inclined surfaces look. But as the filtering strength increases, so does the load on the video card, which can reduce the number of frames generated per second.

Various additional effects can be applied at the texturing stage. For example, environment mapping creates surfaces that reflect the game scene: mirrors, shiny metal objects and so on. Another impressive effect, bump mapping, makes light falling on a surface at an angle create the appearance of relief.
Texturing is the last stage of rendering, after which the finished image goes to the video card's frame buffer and is displayed on the monitor screen.

Electronic components of the video card

Now that the process of building a three-dimensional image is clear, we can list the technical characteristics of the video card components that determine how fast that process runs. The main components of a video card are the graphics processor (GPU - Graphics Processing Unit) and the video memory.

GPU

One of the main characteristics of this component (as of a PC's central processor) is the clock frequency. Other things being equal, the higher it is, the faster data is processed and the higher the number of frames per second (FPS) in computer games. GPU frequency is an important, but not the only, parameter affecting performance: modern models from Nvidia and ATI with comparable performance run at quite different GPU frequencies.

High-performance Nvidia adapters have GPU clock speeds from 550 to 675 MHz, while "middling" and cheap low-performance cards run their graphics processors below 500 MHz.
At the same time, the GPUs of ATI's "top" cards run at 600 to 800 MHz, and even its cheapest video adapters do not drop below 500 MHz.

Yet even though Nvidia's GPUs are clocked lower than ATI's, they provide at least the same level of performance, and often better. The reason is that other characteristics of the GPU matter no less than the clock frequency.

1. The number of texture modules (TMU - Texture Mapping Units) - the GPU elements that map textures onto triangles. The speed of building a three-dimensional scene depends directly on the number of TMUs.
2. The number of rendering pipelines (ROP - Render Output Pipeline) - the blocks that perform "service" functions such as blending and writing the finished pixels to the frame buffer. Modern GPUs tend to have fewer ROPs than texture units, and this limits the overall output speed. For example, the chip of the Nvidia GeForce 8800 GTX video card has 32 texture units and 24 ROPs, while the processor of the ATI Radeon HD 3870 has only 16 texture units and 16 ROPs.

The performance of the texture modules is expressed by the fill rate - the texturing speed, measured in texels per second. The GeForce 8800 GTX has a texel fill rate of 18.4 billion texels/s. A more objective indicator, however, is the fill rate measured in pixels, since it reflects the speed of the ROPs; for the GeForce 8800 GTX this value is 13.8 billion pixels/s (both figures are derived in the short calculation at the end of this section).
3. The number of shader units (shader processors) that - as the name suggests - handle pixel and vertex shaders. Modern games make heavy use of shaders, so the number of shader units is critical to performance.

Not so long ago, GPUs had separate modules for running pixel and vertex shaders. Nvidia's GeForce 8000 series graphics cards and ATI's Radeon HD 2000 adapters were the first to adopt a unified shader architecture. The graphics processors of these cards have blocks capable of processing both pixel and vertex shaders - universal shader processors (stream processors). This approach makes it possible to fully utilize the computing resources of the chip for any ratio of pixel and vertex calculations in the game code. In addition, in modern GPUs, shader units often operate at a frequency higher than the GPU clock frequency (for example, in the GeForce 8800 GTX this frequency is 1350 MHz versus the "general" 575 MHz).

Note that Nvidia and ATI count the shader processors in their chips differently. The Radeon HD 3870, for example, is credited with 320 such blocks, while the GeForce 8800 GTX has only 128. In fact, ATI counts the individual computing units that make up each shader processor rather than whole processors. Each of its shader processors contains five such units, so the Radeon HD 3870 effectively has only 64 shader units, which is one reason this video card is slower than the GeForce 8800 GTX.

Video card memory

Video memory does for the GPU what RAM does for the PC's central processor: it stores all the "building material" needed to create an image - textures, geometry data, shader programs and so on.

What video memory characteristics affect the performance of a graphics card

1. Volume. Modern games use a huge number of high-resolution textures, and storing them requires a corresponding amount of video memory. Most of today's "top" video adapters and mid-range cards come with 512 MB of memory, which cannot be expanded later. Cheaper video cards come with half that amount, which is no longer enough for modern games.

When memory runs short, the GPU has to keep loading textures from the PC's RAM over a much slower link, and performance can drop noticeably. On the other hand, an excessively large amount of memory may give no increase in speed at all, since the extra "space" simply goes unused. Buying a video adapter with 1 GB of memory makes sense only if it belongs to the "top" products (ATI Radeon HD 4870, Nvidia GeForce 9800, and the latest GeForce GTX 200 series cards).

2. Frequency. For modern video cards this parameter ranges from 800 to 3200 MHz and depends, first of all, on the type of memory chips used. DDR2 chips provide operating frequencies of up to 800 MHz and are used only in the cheapest graphics adapters. GDDR3 and GDDR4 memory raises the range to 2400 MHz, and the latest ATI Radeon HD 4870 cards use GDDR5 memory running at a fantastic 3200 MHz.

Memory frequency, like GPU frequency, has a big impact on a video card's performance in games, especially when full-screen anti-aliasing is used. Other things being equal, the higher the memory frequency, the higher the performance, because the GPU spends less time idling while waiting for data. A memory frequency of 1800 MHz is roughly the lower limit separating high-performance cards from slower ones.

3. The bit width of the video memory bus affects the overall performance of the card even more strongly than the memory frequency. It shows how much data the memory can transfer in one clock cycle, so doubling the bus width is equivalent to doubling the memory clock frequency. Most modern video cards have a 256-bit memory bus. Cutting the width to 128 or, worse, 64 bits deals a heavy blow to performance. In the most expensive video cards, on the other hand, the bus can be widened to 512 bits (so far only the latest GeForce GTX 280 can boast of this), which comes in very handy given the power of their graphics processors.

Where to find information about the technical characteristics of the video card

If a graphics card has outstanding parameters (high processor and memory clock frequencies, a large memory volume), they are usually printed right on the box. But the most complete specifications for video adapters and the GPUs they are based on can only be found online. General information is posted on the GPU manufacturers' corporate websites: Nvidia (www.nvidia.ru) and ATI (www.ati.amd.com/ru). Details can be found on unofficial websites dedicated to video cards - www.nvworld.ru and www.radeon.ru. The Wikipedia online encyclopedia (www.ru.wikipedia.org) is also a good help, and users buying a card with an eye to overclocking can turn to www.overclockers.ru.

Simultaneous use of two video cards

To get maximum performance, you can install two video cards in a computer at once. Manufacturers provide dedicated technologies for this: SLI (Scalable Link Interface, used by Nvidia cards) and CrossFire (developed by ATI). To take advantage of them, the motherboard must not only have two PCI-E slots for video cards but also support one of these technologies. Many motherboards based on Intel chipsets can run ATI cards in CrossFire mode, but only boards based on Nvidia's own chipsets can combine two (or even three!) Nvidia video cards into one "team". If the motherboard supports neither technology, two video cards will still work in it, but only one will be used in games; the second will merely make it possible to drive a couple of additional monitors.
Note that using two video cards does not double performance; the average gain you should count on is about 50%. Moreover, the tandem reveals its full potential only with a powerful central processor and a high-resolution monitor.

What are shaders

Shaders are small programs in the game code that modify the process of building the virtual scene, opening up possibilities unattainable with traditional 3D rendering tools. Modern game graphics are unthinkable without shaders.

Vertex shaders change the geometry of 3D objects, making it possible to achieve natural animation of complex game-character models, physically correct deformation of objects, or realistic waves on water. Pixel shaders change the color of pixels and allow effects such as realistic ripples on water, complex lighting and surface relief. Pixel shaders are also used to post-process the frame: all kinds of "cinematic" effects such as motion blur, glare from very bright light, and so on.

There are several versions of the Shader Model specification. All modern video cards support pixel and vertex shaders version 4.0, which provide more realistic effects than the previous, third version. Shader Model 4.0 is supported by the DirectX 10 API, which runs exclusively on Windows Vista; in addition, the games themselves must be written for DirectX 10.

Do I need an AGP video card for an old system

If your PC's motherboard has an AGP port, your video card upgrade options are very limited. The most the owner of such a system can hope for is a card of the Radeon HD 3850 series from AMD (ATI).

By today's standards its performance is below average. Moreover, the vast majority of motherboards with an AGP interface are designed for the outdated Intel Pentium 4 and AMD Athlon XP processors, so overall system performance will still fall short for modern 3D graphics. Fitting a new AGP video card really only makes sense for motherboards for AMD Athlon 64 processors with Socket 939; in all other cases it is better to buy a new computer with a PCI-E interface, DDR2 (or DDR3) memory and a modern CPU.


We all know that a graphics card and a processor have somewhat different tasks, but do you know how they differ in their internal structure? Both the CPU (central processing unit) and the GPU (graphics processing unit) are processors, and they have much in common, yet they were designed to perform different tasks. You will learn more about this in this article.

CPU

The main task of the CPU, put simply, is to execute a chain of instructions in the shortest possible time. The CPU is designed so that it can execute several such chains at once, or split one stream of instructions into several, execute them separately, and then merge them back into one in the correct order. Each instruction in a thread depends on those that precede it, which is why the CPU has so few execution units and why all the emphasis is on execution speed and on reducing idle time, achieved with cache memory and a pipeline.

GPU

The main function of the GPU is rendering 3D graphics and visual effects, so everything in it is a little simpler: it receives polygons at the input and, after performing the necessary mathematical and logical operations on them, outputs pixel coordinates. In effect, the GPU's work comes down to operating on a huge number of independent tasks, so it contains a large amount of memory (though not as fast as the CPU's cache) and a huge number of execution units: modern GPUs have 2048 or more of them, whereas in a CPU the number of cores can reach 48 but most often lies in the range of 2 to 8.

Main differences

The CPU differs from the GPU primarily in the way it accesses memory. In the GPU, memory access is coherent and easily predictable: if a texture texel is read from memory, the neighboring texels will be needed shortly afterwards. Writing is similar: a pixel is written to the frame buffer, and a few cycles later the one next to it will be written. So, unlike a general-purpose processor, the GPU simply does not need a large cache; 128-256 kilobytes is enough for textures. In addition, video cards use faster memory, so the GPU has many times more bandwidth available, which is also very important for parallel computations operating on huge data streams.

There are also big differences in multithreading support: the CPU executes 1-2 computation threads per core, while the GPU can support several thousand threads per multiprocessor, of which there are several in a chip! And while switching from one thread to another costs the CPU hundreds of cycles, the GPU switches between several threads in a single cycle.

In the CPU, most of the chip area is occupied by instruction buffers, hardware branch prediction and large amounts of cache memory, while in the GPU most of the area is occupied by execution units.

Computation speed difference

If the CPU is a kind of "boss" that makes decisions according to the program's instructions, the GPU is a "worker" that performs a huge number of identical calculations. It turns out that if you give the GPU simple, independent mathematical problems, it copes with them much faster than the central processor. Bitcoin miners exploit this difference with great success.

Bitcoin Mining

The essence of mining is that computers in different parts of the world solve mathematical problems, as a result of which bitcoins are created. All new bitcoin transactions are passed along the chain to miners, whose job is to pick, out of millions of combinations, the single hash that fits all the new transactions and a secret key, and which earns the miner a reward of 25 bitcoins at a time. Since computation speed depends directly on the number of execution units, the GPU turns out to be far better suited to this kind of task than the CPU: the more calculations performed, the higher the chance of earning bitcoins. Things have even gone as far as building entire mining farms out of video cards.

What do we look at first when choosing a smartphone? Setting cost aside for a moment, the first thing we choose is, of course, the screen size. Then we look at the camera, the amount of RAM, the number of cores and the processor frequency. Here everything is simple: the more, the better, and the less, the worse. However, modern devices also have a graphics processor, or GPU. What it is, how it works and why it is important to know about it, we describe below.

GPU (Graphics Processing Unit) is a processor designed exclusively for graphics processing operations and floating point calculations. It primarily exists to ease the work of the main processor when it comes to resource-intensive games or applications with 3D graphics. When you play a game, the GPU is responsible for creating graphics, colors, and textures, while the CPU can handle artificial intelligence or game mechanics.

The GPU's architecture is not that different from the CPU's, but it is optimized for efficient work with graphics. If you force the GPU to do any other kind of computation, it will show itself at its worst.


Separately connected video cards drawing a lot of power exist only in laptops and desktop computers. Android devices use integrated graphics as part of what we call an SoC (System-on-a-Chip); the processor may have an integrated Adreno 430 GPU, for example. The memory it uses for its work is the system memory, whereas video cards in desktop PCs have memory allocated for their exclusive use. True, hybrid chips also exist.

While a CPU has a few cores running at high clock speeds, a GPU has many cores running at lower speeds that do nothing but vertex and pixel calculations. Vertex processing mostly revolves around the coordinate system: the GPU handles geometric tasks by building a three-dimensional space on the screen and allowing objects to move around in it.

Pixel processing is a more complex stage that requires a lot of computing power. Here the GPU overlays layer upon layer and applies effects, doing everything needed to create complex textures and realistic graphics. Once both stages are complete, the result is transferred to the screen of your smartphone or tablet. All of this happens millions of times a second while you are playing a game.


Of course, this account of how the GPU works is very superficial, but it is enough to give you the right general idea, let you keep up a conversation with friends or a sales assistant, and understand why your device gets so hot during a game. Later we will discuss the advantages of particular GPUs for specific games and tasks.

According to AndroidPit

Do you know how to choose the preferred graphics processor from the two available options for running an application or game? If not, then I suggest that laptop owners read this article.

Today even a laptop of average cost and performance comes with two video cards: the first, used by default, is integrated, and the second is discrete. The extra GPU mostly comes with gaming laptops, but it is not uncommon on non-gaming machines either.

Among integrated solutions the choice is small: it is usually an Intel chip. Discrete GPUs come from either Nvidia or AMD; these are the most common and most trusted products, and manufacturers try, first of all, to equip their devices according to our preferences.

Now let's briefly look at how the two video cards interact. When the demands of a running application exceed what the integrated card can handle, the system automatically switches to the discrete one. This happens mostly when you launch games.

As mentioned above, the PC market is dominated by two major GPU manufacturers. Nvidia, the more widely used of the two, relies on its relatively recent Optimus technology: whenever it detects that a program or game needs additional, more powerful resources, the discrete GPU is activated automatically.

Now I'll show you how you can easily force an application to use either the high-performance or the integrated GPU, at the user's choice. Today this will be demonstrated only with NVIDIA and Intel graphics.

GPU

Open the NVIDIA Control Panel. The easiest and fastest way is to right-click on the corresponding icon located on the Taskbar in the lower right corner. Go to the "Desktop" menu and check the box next to "Add item to context menu".

Now, after these simple steps, you can right-click on the shortcut of any application and, in the menu item that appears, select one of two launch options.

PERMANENT START

If you decide to use only the discrete video card all the time, go to the "Manage 3D settings" section of the Control Panel, select the "Program settings" tab, add the required game or program in step 1, select the desired video card in step 2, and then click "Apply".


The integrated graphics processor plays an important role for both gamers and undemanding users.

The quality of games, movies, online video and images depends on it.

Principle of operation

The graphics processor is built into the computer's motherboard - this is what integrated graphics looks like.

As a rule, it is used to remove the need to install a separate graphics adapter.

This technology helps to reduce the cost of the finished product. In addition, due to the compactness and low power consumption of such processors, they are often installed in laptops and low-power desktop computers.

Thus, integrated graphics processors have filled this niche so much that 90% of laptops on US store shelves have just such a processor.

Instead of a conventional video card's own memory, integrated graphics often uses the computer's RAM as its working memory.

True, this solution somewhat limits the device's performance, since the computer and the GPU share the same memory bus.

Such a "neighborhood" therefore affects how tasks are performed, especially when working with complex graphics and during gameplay.

Kinds

Integrated graphics fall into three groups:

  1. Shared-memory graphics - a device that shares memory with the main processor. This greatly reduces cost and improves energy efficiency, but degrades performance. For those who work with complex programs, integrated GPUs of this kind are most likely not a good fit.
  2. Discrete graphics - a video chip and one or two video memory modules are soldered onto the motherboard. This noticeably improves image quality and makes it possible to work with three-dimensional graphics with good results. True, you have to pay a lot for it, and if you want a processor that is high-performing in every respect, the cost can be incredibly high. In addition, the electricity bill rises slightly: the power consumption of discrete GPUs is higher than usual.
  3. Hybrid discrete graphics - a combination of the two previous types, made possible by the PCI Express bus. Memory is accessed both through the soldered video memory and through the system RAM. With this solution the manufacturers wanted a compromise, but it does not eliminate all the shortcomings.

Manufacturers

As a rule, the development and manufacture of integrated graphics processors is handled by large companies - Intel, AMD and Nvidia - but many smaller enterprises are also involved in this area.

Enable

This is easy to do in the BIOS. Look for Primary Display or Init Display First. If you do not see anything like that, look for Onboard, PCI, AGP or PCI-E (it all depends on the buses installed on the motherboard).

By selecting PCI-E, for example, you enable the PCI-Express video card and disable the integrated one.

Thus, to enable the integrated video card, you need to find the appropriate parameters in the BIOS. Often the activation process is automatic.

Disable

Disabling is best done in BIOS. This is the simplest and most unpretentious option, suitable for almost all PCs. The only exceptions are some laptops.

Again, find Peripherals or Integrated Peripherals in BIOS if you are working on a desktop.

For laptops, the name of the function is different, and not the same everywhere. So just look for something related to graphics. For example, the desired options can be placed in the Advanced and Config sections.

Disabling is also done in different ways: sometimes it is enough simply to select "Disabled" and set the PCI-E video card first in the list.

If you are a laptop user, do not be alarmed if you cannot find a suitable option; such a function may simply not exist on your model. For all other devices the rules are simple: however the BIOS itself looks, the underlying settings are the same.

If you have two video cards and both are shown in Device Manager, things are quite simple: right-click one of them and select "Disable". Keep in mind, however, that the display may go blank - and most likely it will.

However, this too is a solvable problem; it is usually enough to restart the computer.

Perform all subsequent settings on the remaining card. If this method does not work, roll back your actions in safe mode. You can also fall back on the previous method - through the BIOS.

Two utilities - the NVIDIA Control Panel and the Catalyst Control Center - let you configure the use of a specific video adapter.

They are the least troublesome of the three methods: the screen is unlikely to go blank, and you won't accidentally mess up settings in the BIOS either.

For NVIDIA, all settings are in the 3D section.

You can choose your preferred video adapter for the entire operating system, and for certain programs and games.

In the Catalyst software, an identical function is located in the "Power" option under the "Switchable Graphics" sub-item.

Thus, switching between GPUs is not difficult.

There are different ways to do this, both through software and through the BIOS. Enabling or disabling one or another graphics adapter may be accompanied by some glitches, mainly related to the image.

The picture may go blank or appear distorted. Nothing should happen to the files on the computer itself, unless you changed something you shouldn't have in the BIOS.

Conclusion

As a result, integrated graphics processors are in demand due to their cheapness and compactness.

The price you pay for this is some of the computer's overall performance.

In some cases integrated graphics are entirely sufficient, but for serious work with three-dimensional images a discrete processor remains the better choice.

In addition, the industry leaders are Intel, AMD and Nvidia. Each of them offers its own graphics accelerators, processors and other components.

The latest popular models are Intel HD Graphics 530 and AMD A10-7850K. They are quite functional, but have some flaws. In particular, this applies to the power, performance and cost of the finished product.

A graphics processor built into the CPU can be enabled or disabled manually through the BIOS, utilities and various programs, or the computer can do it for you; it all depends on which video card the monitor is connected to.