BIT
In computing and information technology, a bit is the smallest unit of information. The word is short for Binary Digit. Computers encode information with two kinds of electrical signal: a high voltage level (traditionally about 5 volts), symbolically represented by the digit 1, and the absence of that signal, represented by 0. These two symbols are like the alphabet of the computer. A system of two symbols is called a "base two" or "binary" system. Each of the two digits is called a bit: 0 is a bit, and 1 is also a bit.
Example: 0010 is a set of 4 bits.
Although computers have only two symbols (digits) in their alphabet, they can still write numbers and make sentences. For example, to represent the decimal number 3 they use 11, and for the decimal number 4 they use 100. At the same time, the binary number 1000001 can be the decimal number 65 or can become the letter A of our alphabet.
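Here is a minimal sketch in Python (chosen only for illustration) showing how the very same bit pattern can be read as a number or as a letter, using Python's built-in int and chr functions:

    # Interpret the bit pattern 1000001 in two different ways.
    bits = "1000001"

    as_number = int(bits, 2)    # read as a binary number -> 65
    as_letter = chr(as_number)  # read 65 as an ASCII code -> 'A'

    print(as_number)  # 65
    print(as_letter)  # A

The bits never change; only our interpretation of them does.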
What happens is that the value or the meaning of a binary number depends on how we decide to interpret it. The meaning we give to a binary number is the information hidden in it. To illustrate this, let's use the example of a traffic light. A traffic light signals four main pieces of information: Green for Go, Red for Stop, Orange for Slow Down, and Blinking Orange for Hazard.
Now let's use binary numbers to represent the same information:
00 for Red
01 for Green
10 for Orange
11 for Blinking Orange
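A small Python sketch of this made-up encoding (the table and the decode helper are invented here purely for illustration):

    # Two-bit codes for the four traffic-light states.
    TRAFFIC_CODES = {
        "00": "Red (Stop)",
        "01": "Green (Go)",
        "10": "Orange (Slow Down)",
        "11": "Blinking Orange (Hazard)",
    }

    def decode(bits):
        """Return the meaning we chose to assign to a two-bit code."""
        return TRAFFIC_CODES[bits]

    print(decode("10"))  # Orange (Slow Down)

Notice that the meanings live in the table we wrote, not in the bits themselves; we could just as easily have assigned 00 to Green.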
Now, could we do more if we had a fifth state, for example when all three lights are off? No, because we would run out of possibilities: with two bits there are only 2 × 2 = 4 unique arrangements (00, 01, 10, and 11), and all four are already taken.
So, with two bits we can store only four pieces of information. What about one bit? With one bit we can distinguish only two states, 0 or 1, and nothing smaller exists. That's why a bit is the smallest unit of information in computing. And the more bits you use, the more information you can store: n bits give 2^n different arrangements.
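To see that 2^n growth concretely, here is a short Python sketch that lists every arrangement of n bits for a few small values of n:

    from itertools import product

    # Every distinct arrangement of n bits can stand for one piece of information.
    for n in range(1, 4):
        patterns = ["".join(p) for p in product("01", repeat=n)]
        print(n, "bit(s):", len(patterns), "values ->", patterns)

    # 1 bit(s): 2 values -> ['0', '1']
    # 2 bit(s): 4 values -> ['00', '01', '10', '11']
    # 3 bit(s): 8 values -> ['000', '001', '010', '011', '100', '101', '110', '111']

Each extra bit doubles the number of arrangements, which is exactly why adding bits lets us store more information.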