Semantic segmentors create pixel-perfect masks of a given image, assigning each pixel to a class.
The most important property to remember when implementing semantic segmentation is that it cannot depict the count of objects in an image: two overlapping cars become a single "car" region, so the individual objects are lost.
Semantic segmentors output masks stored either in JSON format or as PNG files.
When exporting PNGs, each pixel is assigned a color representing its class. Usually, the background itself is not annotated and is represented by the pixel value 0, which is displayed as black.
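The class-to-color mapping above can be sketched as follows; the palette and class names here are made up for illustration, not a fixed standard:

```python
import numpy as np

# Hypothetical palette mapping class index -> RGB color.
palette = {
    0: (0, 0, 0),      # background, pixel value 0, displayed as black
    1: (255, 0, 0),    # class 1, e.g. "road" (example class)
    2: (0, 255, 0),    # class 2, e.g. "car" (example class)
}

# A tiny single-channel class mask.
mask = np.array([[0, 1],
                 [2, 1]], dtype=np.uint8)

# Expand the single-channel class mask into an RGB image.
rgb = np.zeros(mask.shape + (3,), dtype=np.uint8)
for class_id, color in palette.items():
    rgb[mask == class_id] = color
```

`rgb` can then be saved as a regular color PNG with any image library.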
Sometimes, semantic masks appear to be all black when visualized. This happens when the data structure defines the masks with only one channel instead of three for RGB. In that case, the background class typically has the value 0, the first class the value 1, and so on. When you then visualize this as a grayscale image, all these pixels look black, since white corresponds to the value 255.
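A quick fix is to rescale the class indices to the full grayscale range before displaying the mask. A minimal sketch with NumPy (the class layout is invented for the example):

```python
import numpy as np

# A tiny single-channel mask: 0 = background, 1 and 2 = example classes.
mask = np.array([
    [0, 0, 1],
    [0, 2, 1],
    [2, 2, 1],
], dtype=np.uint8)

# Viewed directly as grayscale, the values 0..2 all look black.
# Stretch the class indices over the 0..255 range to make them visible.
n_classes = int(mask.max()) + 1  # 3 classes including background
visible = (mask.astype(np.float32) / (n_classes - 1) * 255).astype(np.uint8)
```

After rescaling, the background stays black while the classes show up as distinct gray levels.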
If you're unsure whether to use object detection, instance segmentation, or semantic segmentation before you start labeling your data: if you can count the objects, use instance segmentation. Labels created for instance segmentation are easy to convert to object detection or semantic segmentation; the other way around is much trickier.
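The easy direction of that conversion can be sketched with NumPy: collapsing per-object instance masks into one semantic mask just drops the object identities. The instance format here (a list of class-id/binary-mask pairs) is an assumption for illustration, not a fixed standard:

```python
import numpy as np

# Hypothetical instance labels: (class_id, binary mask) pairs, as you
# might get from an instance-segmentation labeling tool.
h, w = 4, 4
car_1 = np.zeros((h, w), dtype=bool); car_1[0:2, 0:2] = True
car_2 = np.zeros((h, w), dtype=bool); car_2[2:4, 2:4] = True
instances = [(1, car_1), (1, car_2)]  # class 1 = "car" (made-up class id)

# Collapse the instances into a single semantic mask: the per-object
# identity is discarded, only the class of each pixel is kept.
semantic = np.zeros((h, w), dtype=np.uint8)
for class_id, inst_mask in instances:
    semantic[inst_mask] = class_id
```

Going the other way would require splitting each class region back into separate objects, which cannot be recovered from the semantic mask alone.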
It is possible to combine semantic segmentation and instance segmentation in one image; this is called panoptic segmentation. The idea would be to label the road ahead as a semantic class and the cars as instances. This is not often used in practice, though (yet).