'Optimus technology' is a software [and possibly hardware] solution for automatically switching between an integrated graphics processor or IGP (such as an onboard Intel chip) and a more powerful [NVIDIA] graphics chip. This technology is intended specifically for laptops. Its precursor was 'switchable graphics', in which the user switched between the graphics cards manually. Optimus may require that the NVIDIA GPU has the PCOPY engine.
The graphics system in a laptop has a GPU with some memory; in the case of an IGP this memory may be a region of system memory, but otherwise it is usually dedicated memory living on the GPU. This GPU connects to a laptop display or output port. There are two main problems to solve in order to support Optimus under Linux:
1) Currently we do not have a way to know a priori which outputs (displays) are connected to which GPUs.
2) The Optimus software should perform the task of switching which of the two graphics processors drives your display. Ideally this would be done by directly flipping a hardware switch, called a mux (multiplexer). However, such a mux does not always exist!
If a hardware mux does not exist, there is no physical way to perform this GPU switching. Thus Optimus is used to effectively "implement" a software mux. Specifically, it ensures that the relevant data is sent to and processed on the right GPU, and then the data needed for display is copied to the device that drives the screen.
When it comes to how a specific machine is configured, there are a number of possibilities. If a hardware mux exists, it would be used to select which GPU drives the internal panel, or the external monitor, or possibly both. It is also possible that a GPU is hardwired to the internal panel, so the other GPU cannot possibly drive the internal panel. The same goes for the external monitor output. In the worst case, the Intel GPU is hardwired to the internal panel and the NVIDIA GPU is hardwired to the external output! The best case scenario is a mux that can select which GPU drives which outputs.
Basically, you can have any combination of these possibilities. There is no standard on how to wire things. There should be ways to detect the wiring, and whether there is a mux and where, but the documentation is not available to the developers (maybe you can help us figure out how to do this; have any ideas? You can also 'petition' NVIDIA to release these specs via NVIDIA customer help).
Switcheroo - Using one card at a time
If your laptop has a hardware mux, the kernel switcheroo driver may be able to select the desired GPU at boot. There are also hacks based on the switcheroo, like asus-switcheroo, but they offer no extra value: if one of the hacks happens to work and the switcheroo does not, the switcheroo has a bug. There might already be pending patches waiting to go into the mainline kernel.
In all other cases, you are stuck with whatever happens to work by default: no switching, no framebuffer copying. Yet.
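When the switcheroo is available, it is controlled through a debugfs file. The commands below use the real vga_switcheroo interface, but the file only exists on muxed machines with debugfs mounted, and writing to it requires root; a sketch:

```shell
# vga_switcheroo control file (exists only on muxed systems with
# debugfs mounted; writing requires root).
SWITCH=/sys/kernel/debug/vgaswitcheroo/switch
if [ -w "$SWITCH" ]; then
    cat "$SWITCH"           # lists the GPUs; '+' marks the active one
    echo DIS > "$SWITCH"    # switch to the discrete GPU
                            # (DDIS defers the switch to the next X restart)
    echo OFF > "$SWITCH"    # power down whichever GPU is inactive
else
    echo "vga_switcheroo not available"
fi
```

The other documented commands are IGD (integrated GPU), DIGD (deferred integrated) and ON (power the inactive GPU back up).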
'PRIME GPU offloading' is an attempt to support muxless hybrid graphics in the Linux kernel. It requires:
- An updated graphics stack (kernel, X server and Mesa);
- KMS drivers for both GPUs loaded;
- DDX drivers for both GPUs loaded.
If everything went well, xrandr --listproviders should list two providers. In my case, this gives:
$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x8a cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 1 name:Intel
Provider 1: id: 0x66 cap: 0x7, Source Output, Sink Output, Source Offload crtcs: 2 outputs: 5 associated providers: 1 name:nouveau
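If you want to script against this output, the provider names are easy to pull out. A sketch, with the sample output above hard-coded so it runs anywhere; on a real system you would pipe in `xrandr --listproviders` instead:

```shell
# Extract the trailing "name:..." field from each Provider line.
sample='Providers: number : 2
Provider 0: id: 0x8a cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 1 name:Intel
Provider 1: id: 0x66 cap: 0x7, Source Output, Sink Output, Source Offload crtcs: 2 outputs: 5 associated providers: 1 name:nouveau'
printf '%s\n' "$sample" | sed -n 's/.*name:\(.*\)$/\1/p'
# prints:
# Intel
# nouveau
```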
It is then important to tell PRIME which card should be used for offloading. In my case, I want nouveau to act as the offload source, with the Intel card as the sink:
$ xrandr --setprovideroffloadsink nouveau Intel
When this is done, it becomes very easy to select which card should be used. If you want to offload an application to the second GPU, launch it with DRI_PRIME=1; it will then use the second card for its rendering. If you want to use the "regular" GPU, set DRI_PRIME to 0 or omit it. The behaviour can be seen in the following example:
$ DRI_PRIME=0 glxinfo | grep "OpenGL vendor string"
OpenGL vendor string: Intel Open Source Technology Center
$ DRI_PRIME=1 glxinfo | grep "OpenGL vendor string"
OpenGL vendor string: nouveau
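A tiny wrapper function makes launching offloaded applications less error-prone; the `prime_run` name here is a hypothetical example, not part of any tool:

```shell
# Hypothetical helper: run any command on the offload GPU.
prime_run() {
    DRI_PRIME=1 "$@"
}

# Usage would be e.g.: prime_run glxgears
# Here we just show that the variable reaches the child process:
prime_run sh -c 'echo "DRI_PRIME=$DRI_PRIME"'   # prints DRI_PRIME=1
```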
Everything seems to work but the output is black
Try using a re-parenting compositor. Those compositors usually provide 3D effects.
WARNING: Currently, KWin only works with desktop effects enabled. If a window turns pure black, please try minimizing/maximizing or resizing the window. This bug is being investigated.
Increased power consumption
When using PRIME, the NVIDIA GPU cannot be powered down, which means it drains power even when it is not in use. This problem is being addressed, but it is not entirely trivial to fix. The issue will disappear step by step as we add support for power management.
If you do not plan on using your NVIDIA GPU, it is recommended to blacklist the Nouveau module and to use bbswitch to power off the NVIDIA card. Check your distribution's wiki for more information.
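A sketch of what that setup can look like; the file paths below are distribution-dependent assumptions, so check your distribution's documentation before applying them:

```shell
# 1) Blacklist Nouveau so it never binds to the NVIDIA GPU, e.g. in
#    /etc/modprobe.d/blacklist-nouveau.conf (path varies by distribution):
#        blacklist nouveau
#
# 2) With the bbswitch module loaded, its /proc file toggles GPU power
#    (requires root):
BBSWITCH=/proc/acpi/bbswitch
if [ -w "$BBSWITCH" ]; then
    echo OFF > "$BBSWITCH"   # power the NVIDIA GPU off
    cat "$BBSWITCH"          # e.g. "0000:01:00.0 OFF"
else
    echo "bbswitch not loaded"
fi
```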
Poor performance when using the Nouveau card
Right now, Nouveau does not support reclocking and other power-management features. This cripples the performance of the GPU and increases power consumption compared to the proprietary driver.
Using PRIME with Nouveau may not result in any performance gain right now, but it should in the not-so-distant future.