[solved] Do you need all bits to represent a byte?

In my “computer technology” class, they’re teaching us basic binary.

I keep seeing this:
00000111₂
being:
111₂

I understand that if you wanted to write the bytes “101” and “111” together, it would be
00000101₂ 00000111₂

My question is, can you write a byte without having the 0s before the actual “useful” part? Or does it mean something else?

(Excuse me if I’m wrong on something; the only teacher I met was a substitute and only taught us the basics, not binary math or anything. In fact, he sometimes wrote it with all the bits, and sometimes just four or five. He was kind of inconsistent…)

Can you write 014 and still mean 14?

Yes.
Technically you can have any number of leading zeros in any base, and it doesn’t affect the number.

However, note that if the number is stored in two’s complement (i.e. signed) then the most significant bit does matter: it’s the sign bit. This is because binary has no separate negative sign (you can’t write -014 in hardware). But with one’s complement, the leading zeros don’t matter.

EDIT: actually, not sure on that terminology, I should say that the zeros don’t matter with unsigned integers, too sleepy to go verify on one’s complement etc.
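To see the “leading zeros don’t change the value” part concretely, here’s a quick sketch in Python (my choice of language, just for illustration): `int(s, 2)` parses a bit string as an unsigned binary number, and padding with zeros makes no difference.

```python
# Leading zeros never change the value of an unsigned binary number.
print(int("111", 2))                        # 7
print(int("00000111", 2))                   # 7, same value
print(int("111", 2) == int("00000111", 2))  # True
```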

I’d say no, because a byte implies a word length (usually 8 bits). You can write the binary digits 111, but that’s not a byte.

My bad, I meant: would 101₂ in a byte represent 00000101₂, or something else? Thanks anyway!

It depends. What do the three bits represent? Yes, that’s it for unsigned. But it’s 11111101₂ in signed two’s complement.
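A small Python sketch of that distinction (the helper function is just an illustration, not anything standard): interpreting the same bits as unsigned vs. signed gives different answers, and zero-padding only preserves the value in the unsigned case. 101 read as a 3-bit signed number is -3, and its 8-bit two’s-complement form is 11111101.

```python
def to_signed(bits: str) -> int:
    """Interpret a bit string as a two's-complement signed integer."""
    value = int(bits, 2)
    if bits[0] == "1":               # most significant bit set -> negative
        value -= 1 << len(bits)
    return value

print(to_signed("101"))       # -3 as a 3-bit signed number
print(to_signed("11111101"))  # -3 again: the 8-bit sign-extended form
print(to_signed("00000101"))  # 5: zero-padding keeps the value positive
```

Note that extending a negative number to a wider width means repeating the sign bit (sign extension), not padding with zeros.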

Typically a word refers to the native register size of the CPU, but the term is less clear-cut on modern 64-bit CPUs.

Wall of text to clarify ambiguous terminology, sorry … :slight_smile:

A byte just means a group of 8 bits (binary digits). It isn’t necessarily a number; it’s a good idea to keep those two ideas separate in your mind. Bytes are commonly used for other things as well as representing numbers. E.g. as well as the numbers from 0 to 255, a byte might hold 256 unique combinations of bits for 256 distinct enemy types, or the 8 bits can be used individually as 8 boolean (i.e. true or false) flags.
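That flags idea can be sketched in a few lines of Python (the flag names here are made up for the example): each bit of the byte is an independent true/false switch, set and tested with bitwise operators.

```python
# A byte used as 8 independent boolean flags rather than as a number.
FLAG_VISIBLE = 1 << 0   # 00000001
FLAG_HOSTILE = 1 << 1   # 00000010
FLAG_FLYING  = 1 << 2   # 00000100

state = FLAG_VISIBLE | FLAG_FLYING   # set two flags: 00000101
print(bool(state & FLAG_HOSTILE))    # False
print(bool(state & FLAG_FLYING))     # True
state &= ~FLAG_HOSTILE & 0xFF        # clearing a flag leaves the others alone
print(format(state, "08b"))          # 00000101
```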

Just like decimal digits need not represent numbers: e.g. a group of 4 like “1234” could be an ATM code, or a group of 6 like “555123” a local phone number.

So in answer to your literal question, no, you can’t really write a byte without leading zeros, because a byte actually means 8 bits. If you wrote 4 bits it would be a “nybble” (terrible word!). But to answer the question you may have meant, see BurntPizza above: leading zeros are not meaningful when writing a number in any base for a human to read.

I can remember when I was taught binary it took me a while to separate the two subjects in my head of
(a) binary numbers in general as a math concept, and
(b) how computers use their little chunks of bits and bytes to perform binary arithmetic.
I guess it’s natural to teach them together because binary math isn’t used anywhere else except in computers, but I remember finding it confusing for a while till I got it clear they are distinct.

So to be clear, talking of two’s complement is part of subject (b)… i.e. it’s how a computer represents negative numbers using only "1"s and "0"s. In human written language… i.e. subject (a)… something like “minus 15”, i.e. decimal -15, would be written in binary as -1111. When the class looks into subject (b) in more detail they’ll come across not only two’s complement as a way of representing negatives, but the beauties of endianism too. I envy you, lucky students ;D
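To tie (a) and (b) together with that example, here’s a short Python sketch: on paper -15 is just -1111, but an 8-bit two’s-complement byte stores it as 11110001 (invert the bits of 15, then add 1). Python’s `& 0xFF` performs the wrap into 8 bits for us.

```python
# Decimal -15: "human" binary is -1111, but an 8-bit two's-complement
# byte stores it as 11110001 (invert the bits of 15, then add 1).
n = -15
byte = n & 0xFF              # two's-complement wrap into 8 bits
print(format(byte, "08b"))   # 11110001

# Recover the signed value from the raw byte:
print(byte - 256 if byte & 0x80 else byte)   # -15
```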