Scoping/Discovery: GPU accelerated display #188

@Nine-H

Description

why?

  • blitting logic is arcane
  • to free up as much CPU as possible for DSP
  • potential gains in energy efficiency
  • scales better than software rendering: same RAM and CPU cost at every resolution
  • opens the door to better UX/UI in the future
  • get a killer feature over picotracker

roadmap

  • create GL adapter
  • link and compile with OpenGL
  • draw a full screen quad and display pixel shader output (a sketch of this step follows after this list)
  • render some arbitrary text to screen
  • render text from the screen buffer
  • fg/bg colors
  • full reimplementation of the software UI without regressions
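
As a rough illustration of the first three roadmap items, here is a minimal sketch of a GL adapter that creates a context and draws a fullscreen "quad" (really one oversized triangle) every frame. It assumes an SDL2 + GLEW desktop build with OpenGL 3.3 core; the windowing/loader libraries and all names here are assumptions for illustration, not decisions.

```cpp
// Minimal sketch: create a GL context and draw a fullscreen triangle per frame.
// Assumes SDL2 + GLEW + desktop OpenGL 3.3 core (build libraries are assumptions).
#include <SDL.h>
#include <GL/glew.h>
#include <cstdio>

// Vertex shader: emits one oversized triangle covering the screen, generated
// from gl_VertexID, so no vertex buffer is needed.
static const char* kVertSrc = R"(
#version 330 core
out vec2 uv;
void main() {
  vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
  uv = pos;
  gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
}
)";

// Placeholder frag shader (a gradient); the real one would sample the font
// atlas as sketched under "rough design".
static const char* kFragSrc = R"(
#version 330 core
in vec2 uv;
out vec4 outColor;
void main() { outColor = vec4(uv, 0.2, 1.0); }
)";

static GLuint Compile(GLenum type, const char* src) {
  GLuint s = glCreateShader(type);
  glShaderSource(s, 1, &src, nullptr);
  glCompileShader(s);
  GLint ok = 0;
  glGetShaderiv(s, GL_COMPILE_STATUS, &ok);
  if (!ok) {
    char log[512];
    glGetShaderInfoLog(s, sizeof(log), nullptr, log);
    std::fprintf(stderr, "shader error: %s\n", log);
  }
  return s;
}

int main(int, char**) {
  SDL_Init(SDL_INIT_VIDEO);
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
  SDL_Window* win =
      SDL_CreateWindow("gl adapter test", SDL_WINDOWPOS_CENTERED,
                       SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_OPENGL);
  SDL_GLContext ctx = SDL_GL_CreateContext(win);
  glewInit();

  GLuint prog = glCreateProgram();
  glAttachShader(prog, Compile(GL_VERTEX_SHADER, kVertSrc));
  glAttachShader(prog, Compile(GL_FRAGMENT_SHADER, kFragSrc));
  glLinkProgram(prog);

  GLuint vao;  // core profile needs a VAO bound even for attributeless draws
  glGenVertexArrays(1, &vao);
  glBindVertexArray(vao);

  bool running = true;
  while (running) {
    SDL_Event e;
    while (SDL_PollEvent(&e)) {
      if (e.type == SDL_QUIT) running = false;
    }
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(prog);
    glDrawArrays(GL_TRIANGLES, 0, 3);  // the fullscreen "quad"
    SDL_GL_SwapWindow(win);
  }
  SDL_GL_DeleteContext(ctx);
  SDL_DestroyWindow(win);
  SDL_Quit();
  return 0;
}
```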

rough design

  • create an OpenGL context
  • convert the font to a texture and upload it to the GPU
  • char array to hold the characters (uniform?)
  • array of flags to hold whether each character is inverted?
  • vec3 array to hold the RGB color lookup table (uniform?)
  • integer array to hold color indexes (uniform?)
  • create a fullscreen quad to use as the render target
  • convert character and string writing functions to mutate the char and color arrays instead of doing a blit into the backbuffer
  • every frame make an opengl draw call on fullscreen quad:
    • vert shader:
      • pass variables through to the frag shader
    • frag shader (a sketch follows after this list):
      • use the character array to determine which character the pixel is on
      • calculate texture space offsets (should this be a precalculated lookup?)
      • use the offsets to get brightness from the font texture
      • multiply brightness by -1 and add 1 if highlighted/inverted
      • use the color index array to determine the color index of the character the pixel is on
      • use the color index to get the RGB color value
      • multiply brightness by color to get the final pixel color
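
The frag shader bullets above translate roughly to the following GLSL, shown as a C++ string constant since the adapter will presumably embed or load it. The grid size (32x24 cells), 16x16 glyph atlas layout, palette size, and uniform names are placeholders, not anything settled in the codebase; note also that uniform arrays this large can exceed GL_MAX_FRAGMENT_UNIFORM_COMPONENTS on some drivers, in which case the per-cell data is better packed into a small data texture or UBO.

```cpp
// Hypothetical tilemap frag shader; cell counts, atlas layout, and uniform
// names are placeholders for illustration.
static const char* kTileFragSrc = R"(
#version 330 core
in vec2 uv;                      // 0..1 across the fullscreen quad
out vec4 outColor;

uniform sampler2D fontTex;       // font converted to a texture: a 16x16 glyph
                                 // atlas, white on black, sampled with GL_NEAREST
uniform ivec2 gridSize;          // character cells, e.g. 32x24
uniform int  chars[32 * 24];     // char array holding the characters
uniform int  inverted[32 * 24];  // 1 = this cell is highlighted/inverted
uniform int  colorIdx[32 * 24];  // color index per cell
uniform vec3 palette[16];        // RGB color lookup table

void main() {
  // determine which character cell this pixel is on
  // (uv may need a y flip depending on quad/texture orientation)
  vec2 cellF = uv * vec2(gridSize);
  ivec2 cell = ivec2(floor(cellF));
  int i = cell.y * gridSize.x + cell.x;

  // texture space offset of the glyph inside the atlas
  int ch = chars[i];
  vec2 glyphOrigin = vec2(ch % 16, ch / 16) / 16.0;
  vec2 withinGlyph = fract(cellF) / 16.0;

  // brightness from the font texture; invert for highlighted cells
  float brightness = texture(fontTex, glyphOrigin + withinGlyph).r;
  if (inverted[i] == 1) brightness = 1.0 - brightness;

  // color index -> RGB from the lookup table, times brightness = final pixel
  outColor = vec4(palette[colorIdx[i]] * brightness, 1.0);
}
)";
```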

potential pitfalls

  • we don't have profiling:
    • we don't know how much time we're spending drawing
    • are frag shader based tilemaps actually going to be faster than software?
  • how much are we painted into a corner by legacy code?
  • is this the optimal way to draw bitmap fonts?
  • can we look at the design of high-performance terminal emulators for optimal fixed-width text rendering?
  • we might hit a brick wall debugging/finalizing this; should the GL UI actually avoid parity with the software renderer?

out of scope

the following should be their own projects

  • user setting to enable hardware rendering
    • it would be nice to pick between software, hardware, and TUI rendering for testing/as a fallback
    • it would be nice to not have to ship different binaries for hardware and software UI
    • so much refactoring though
    • we should probably clean up/prune the Adapters namespace and IFDEFs first
  • other platforms:
    • I'm targeting Linux Mesa drivers and using frag shaders
    • embedded Linux:
      • proprietary blob drivers may work, but will be unsupported
      • not digging into framebuffers, fbcon, DMA, the kernel, or anything else for the MVP
    • Windows:
      • I don't own a Windows PC
      • I don't have reliable free time to coordinate volunteer testers
      • previously failed to successfully cross compile
      • by all accounts OpenGL is a mess on the platform
    • macOS:
      • I don't own a recent enough Mac
    • PSP/console:
      • consoles often use fixed-function T&L chips, not general-purpose GPUs
      • OpenGL on the PSP doesn't have pixel/frag shaders
      • probably requires vertex manipulation to be performant
      • don't want to use platform-specific libraries/toolchains
      • I prefer to support open platforms
      • there's no general solution: every platform is going to need different data, different algorithms, and a different allocation of what work is done on the GPU and CPU
  • allowing font/theme change on the fly
    • easy enough when using uniforms
    • requires additional design for the relevant menu/interface
    • requires significant refactoring to add the additional API to the adapters
  • waveform/scope/spectrogram/filter cutoff/oscilloscope visualizations
    • should be straightforward to add a flag to the display and a uniform array to hold the data
    • really easy to render
    • should be basically free on OpenGL
    • requires UI/UX design to integrate
    • touches DSP code
    • requires additional libraries (FFT, waveform)
    • will require massive refactoring/cleanup
    • possibly start design after the milestone
  • user shaders/high resolution/high res fonts/greyscale fonts/external textures/arbitrary scaling
    • it would be cool to allow users to rewrite the vert/frag shaders
    • I don't want to deal with:
      • file handling
      • shader code getting munged by XML
      • phantom audio bug reports from users running heavy shaders
      • off-by-one/alignment errors
      • creating and documenting new settings
      • diverging too far from the software renderer in the MVP
  • screen transitions
    • some kind of sliding effect could help new users learn the layout faster
    • not too hard (see the sketch after this list)
      • write the last frame before the transition to a texture
      • use it to draw/blend the old screen
      • use matrix math to offset the rendering of the new and old screens
      • use an easing function to animate it (https://easings.net/)
    • requires refactoring to signal the direction, current time, and transition duration
    • diverges from software rendering behavior
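
For the sliding effect, the timing side is just an easing curve mapped onto a screen-width offset. A minimal sketch, assuming a horizontal slide and a fixed duration (the function names and the NDC convention are illustrative; easeOutCubic is taken from https://easings.net/):

```cpp
#include <algorithm>

// easeOutCubic from https://easings.net/: maps 0 -> 1 over the transition,
// decelerating toward the end.
static float EaseOutCubic(float t) {
  float u = 1.0f - t;
  return 1.0f - u * u * u;
}

// Returns the x offset in normalized device coordinates for the old screen;
// the new screen would be drawn at offset + 2.0 (one full screen width to the
// right), so it slides in as the old one slides out. A direction flag would
// flip the sign for slides to the left.
static float SlideOffsetX(float elapsedMs, float durationMs) {
  float t = std::min(elapsedMs / durationMs, 1.0f);
  return -2.0f * EaseOutCubic(t);
}
```

Both offsets would then feed the translation part of the matrix (or a simple vec2 uniform) used when drawing the old-frame texture and the new screen.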
