Analog vs Digital
Analog signals are inherently tolerant of “glitching”. As an example, consider how a microphone and speaker work: the microphone picks up vibrations, which are transmitted over a wire, and the speaker reproduces those vibrations. If something happens to the vibrations while they are being transmitted, the speaker will dumbly spit out whatever it received. Analog video benefits from the same tolerance, even though its signal encodings (NTSC, PAL) are quite a bit more complicated. An analog TV, i.e. a CRT (Cathode Ray Tube), will dumbly display whatever you send it. The picture may be a jaggy mess, for example if the horizontal and vertical synchronization signals have been corrupted, but the CRT will still display that corruption. This tolerance of signal corruption, along with the nostalgic aesthetic, is what makes analog video special.

Digital signals, by their fundamental nature, have much narrower opportunities for glitching. Digital signals are encoded as “bits”: pulses of voltage within a specific window (0-5 V, for example) for a specific amount of time. If a signal falls outside those parameters, it simply fails, and your device may show a “no connection” message or a blue screen. Digital signals also use error-correction protocols, which further interfere with glitching, whereas an analog CRT will display whatever it receives.

Note how we are specifying CRTs. You can feed analog signals into LED / flatscreen TVs if they have the correct inputs, but those displays sample the video signal and will fail if the analog signal is too glitchy.
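To make the difference concrete, here is a minimal toy sketch (not a real video decoder). It assumes made-up logic windows of 0-0.8 V for a “0” and 4.2-5 V for a “1”, which are illustrative values rather than any real standard’s thresholds. The “analog” receiver passes whatever voltages it gets straight through, while the “digital” receiver refuses to decode once a sample lands outside its windows, which is roughly why a CRT shows a corrupted picture but a digital display drops to “no connection”.

```python
import random

LOW_MAX = 0.8    # assumed logic-low window:  0.0 - 0.8 V (illustrative only)
HIGH_MIN = 4.2   # assumed logic-high window: 4.2 - 5.0 V (illustrative only)


def analog_receive(voltages):
    """An analog stage just reproduces what it was sent, corruption and all."""
    return voltages


def digital_receive(voltages):
    """Decode each voltage as a bit; give up if any sample is ambiguous."""
    bits = []
    for v in voltages:
        if v <= LOW_MAX:
            bits.append(0)
        elif v >= HIGH_MIN:
            bits.append(1)
        else:
            return None  # out-of-window sample: "no connection" / blue screen
    return bits


if __name__ == "__main__":
    clean = [0.1, 4.9, 0.2, 4.8]                          # a clean bit stream
    glitched = [v + random.uniform(-2, 2) for v in clean]  # inject some "glitch"

    print("analog, glitched :", analog_receive(glitched))   # always shows *something*
    print("digital, clean   :", digital_receive(clean))     # decodes fine
    print("digital, glitched:", digital_receive(glitched))  # often None: signal lost
```

Running it a few times shows the point: the analog path always returns a (mangled) signal you could still display, while the digital path frequently returns nothing at all once the noise pushes samples out of its decision windows.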