For mining
Graphics cards specially designed
for cryptocurrency mining (Bitcoin, Ethereum, etc.). Here it is not just a matter of the theoretical possibility of using a graphics card for mining (
many "regular" graphics cards can be used this way), but of an optimized design developed from the start with the specifics of the process in mind. Some of these models are mining-only and may have no video outputs at all.
Mining is the process of "extracting" cryptocurrency by performing special calculations. Technically, achieving maximum efficiency requires parallelizing these calculations as much as possible. This is why graphics cards proved so convenient for mining: modern GPUs contain hundreds of individual cores and can therefore run hundreds of computations in parallel. Initially this use was unintended, and switching a graphics card into mining mode required various tricks; however, as cryptocurrencies grew in popularity, many manufacturers began producing video adapters designed specifically for this application.
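As a purely illustrative sketch of why the task parallelizes so well, the toy proof-of-work search below tries independent nonces until a hash meets a difficulty target. The function name and parameters are hypothetical, and real mining uses different algorithms running across thousands of GPU cores rather than a single CPU loop; the key point is that every nonce can be checked independently of all the others.

```python
import hashlib

def find_nonce(block_data: bytes, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with
    `difficulty` zero hex characters (a toy proof-of-work)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each candidate nonce is an independent trial, so the search
# maps naturally onto hundreds of parallel GPU cores.
nonce = find_nonce(b"example block", 4)
print(nonce)
```

Because no trial depends on the result of any other, splitting the nonce range across cores gives a near-linear speedup, which is exactly the workload shape GPUs are built for.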
Memory size
The amount of the graphics card's own (dedicated) memory; this parameter is sometimes simply called the graphics card's memory size. The more GPU memory, the more complex and detailed an image the card can process in a given time, and thus the higher its performance and speed (especially important for resource-intensive tasks such as high-end games, video editing, 3D rendering, etc.).
When choosing, keep in mind that a graphics card's performance is affected not only by the amount of memory, but also by its type, operating frequency (see below) and other features. It is therefore quite possible for a model with less memory to be more advanced and more expensive than one with more. A direct comparison is only meaningful between options that are otherwise similar in memory characteristics.
On the modern market, there are mainly video cards with memory capacities of
2 GB,
4 GB,
6 GB,
8 GB,
10 GB,
11 GB,
12 GB, and
16 GB; even larger amounts can be found in the most advanced models.
Memory bus
The amount of data (in bits) that the graphics card's memory bus can transfer in one cycle. Graphics card performance depends directly on bus width: the wider the bus, the more data it transfers per unit of time and, accordingly, the faster the video memory operates.
The practical minimum bus width for modern video cards is
128 bits, a figure typical mainly of low-cost models. Mid-range solutions offer
192 bits and
256 bits, and in advanced models —
352 bits,
384 bits and more, up to
2048 bits.
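To illustrate how bus width feeds into throughput: peak memory bandwidth is simply bytes per transfer multiplied by transfers per second. The sketch below uses made-up clock figures purely for comparison, not the specifications of any particular card.

```python
def memory_bandwidth_gbs(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) bytes
    per transfer, times effective transfers per second (GHz)."""
    return bus_width_bits / 8 * effective_clock_ghz

# Illustrative figures only: at the same effective memory clock,
# tripling the bus width triples the peak bandwidth.
print(memory_bandwidth_gbs(128, 14.0))  # 224.0 GB/s
print(memory_bandwidth_gbs(384, 14.0))  # 672.0 GB/s
```

This is why the section above stresses comparing bus width together with memory type and frequency: a wide bus paired with slow memory, or vice versa, can yield the same real-world bandwidth.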
GPU clock speed
The clock frequency of the card's graphics processor. As a rule, the higher the GPU frequency, the higher the graphics card's performance, but this parameter is not the only one that matters: much also depends on the card's design, in particular the type and amount of video memory (see the relevant glossary entries). As a result, it is not unusual for the model with the lower clock speed to be the faster of two video cards. Note also that high-frequency processors produce a lot of heat and therefore require powerful cooling systems.
Lithography
The process technology used to manufacture the graphics card's GPU.
This parameter is specified as the size of an individual transistor in the processor. The smaller this size, the more advanced the process is considered: shrinking individual elements reduces heat dissipation and the overall size of the chip while increasing its performance. Accordingly, manufacturers keep moving toward smaller process nodes, and the newer the graphics card, the smaller the number in this field tends to be.
Max. resolution
The maximum resolution supported by the graphics card — that is, the largest image size (in pixels) that it can display on an external screen.
The higher the resolution, the sharper and more detailed the picture. On the other hand, as the number of pixels grows, so do the demands on computing power and, accordingly, the cost of the graphics card. Also keep in mind that the full benefit of high resolutions can only be appreciated on monitors with matching characteristics. At the same time, the graphics settings always allow resolutions lower than the maximum, and a good resolution headroom usually means a good overall performance headroom.
As for specific values, the practical minimum for modern video cards is 1600x1200, but higher values are much more common, up to
Ultra HD 4K and
Ultra HD 8K.
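The jump in GPU requirements between these resolutions becomes obvious once the raw pixel counts are compared; the short sketch below simply multiplies width by height for the resolutions mentioned above.

```python
# Pixel counts per frame for the resolutions mentioned above.
# Note that 8K carries four times the pixels of 4K, which is why
# the demands on computing power grow so steeply.
resolutions = {
    "1600x1200": (1600, 1200),
    "Ultra HD 4K (3840x2160)": (3840, 2160),
    "Ultra HD 8K (7680x4320)": (7680, 4320),
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels per frame")
```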
Passmark G3D Mark
The result shown by the graphics card in the test (benchmark) Passmark G3D Mark.
Benchmarks make it possible to evaluate a graphics card's actual capabilities (primarily overall performance). This is especially useful because adapters with similar specifications can in fact differ markedly in capability (for example, due to differences in how well individual components are optimized to work together). Passmark G3D Mark is currently the most popular benchmark for graphics adapters. Results are given in points, with more points corresponding to better performance. As of mid-2020, the most advanced graphics cards score over 17,000 points.
Note that Passmark G3D Mark is used not only for general performance evaluation, but also to gauge the compatibility of a graphics card with a specific processor. The CPU and graphics adapter should be roughly matched in overall computing power; otherwise one component will hold back the other: for example, a weak processor will not let a powerful gaming graphics card realize its full potential. To find a video adapter for a specific CPU model, you can use the lists "Optimal for AMD processors" or "Optimal for Intel processors" in our catalog's selection tools.
HDMI
The number of HDMI outputs provided by the graphics card.
HDMI is by far the most popular interface for transmitting high-definition video and multi-channel audio (it carries both simultaneously). This connector is practically standard on modern monitors and is also widely used in other types of screens: TVs, plasma panels, projectors, etc.
Several outputs make it possible to connect several screens to the graphics card at once, for example a pair of monitors forming an extended workspace. That said, video cards almost never have more than two HDMI ports: for a number of reasons, when driving several screens at once it is easier to use other connectors, primarily DisplayPort.
HDMI version
HDMI interface version supported by the graphics card. For details about HDMI itself, see above, and its versions can be as follows:
— v.1.4. The earliest HDMI standard found in video cards, introduced in 2009. Despite its "venerable age", it is quite capable: it supports 4K video (4096x2160) at 24 fps and Full HD (1920x1080) at up to 120 fps, and is also suitable for transmitting 3D video.
— v.1.4b. The second refinement of v.1.4 described above. The first update, v.1.4a, added support for two additional 3D video formats; HDMI v.1.4b brought mostly minor improvements and clarifications to the v.1.4a specification, barely noticeable to the average user.
— v.2.0. A standard introduced in 2013 to replace HDMI v.1.4. Thanks to full 4K support (up to 60 fps), it is also known as HDMI UHD. Its bandwidth is also sufficient for simultaneous transmission of up to 32 audio channels and up to 4 separate audio streams, and the list of supported aspect ratios gained the ultra-wide 21:9 format.
— v.2.0b. The second update of the HDMI 2.0 standard described above, notable mainly for its HDR capabilities. HDR compatibility itself appeared in the first update, v.2.0a; version 2.0b added support for the HDR10 and HLG standards.
— v.2.1. The newest widespread HDMI standard, released in 2017. It can deliver a frame rate of 120 fps at ultra-high resolutions from 4K up to 8K inclusive, and also brings several HDR-related improvements. Note that the full feature set of HDMI v.2.1 is available only with cables marked Ultra High Speed, although the basic functions work over ordinary cables.
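To see why the newer versions matter, a rough lower bound on the required link rate can be estimated as width × height × frame rate × bits per pixel. The sketch below uses 24-bit color and deliberately ignores blanking intervals and link-encoding overhead, so real cable requirements are somewhat higher; the figures are for illustration only.

```python
def video_data_rate_gbps(width: int, height: int, fps: int,
                         bits_per_pixel: int = 24) -> float:
    """Rough uncompressed video data rate in Gbit/s. Ignores
    blanking intervals and link-encoding overhead, so this is
    a lower bound on the bandwidth the cable must carry."""
    return width * height * fps * bits_per_pixel / 1e9

# 4K at 60 fps: roughly 11.9 Gbit/s of pixel data, which is why
# full 4K@60 support only arrived with HDMI 2.0.
print(video_data_rate_gbps(3840, 2160, 60))

# 8K at 120 fps: roughly 95.6 Gbit/s, far beyond even HDMI 2.1's
# raw bandwidth; such modes rely on stream compression.
print(video_data_rate_gbps(7680, 4320, 120))
```

The same estimate explains the Ultra High Speed cable requirement: the highest modes simply push more data than older cables are certified to carry.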