Deftkit

Number Base Converter — Binary, Octal, Decimal, Hex

Convert numbers between binary, octal, decimal, and hexadecimal instantly. Live bidirectional conversion with explanations — free, runs in your browser.

Example — the value 255 in all four bases:

  • Binary: 0b1111 1111
  • Octal: 0o377
  • Decimal: 255
  • Hex: 0xFF

Bit width: 8 bits · Byte width: 1 byte · Fits in: uint8 / int8

Supports arbitrary-precision integers via BigInt — no 2⁵³ overflow. Handles negative numbers and the standard 0b/0o/0x prefixes. All conversion happens in your browser.

What a number base actually is

A number base (also called a radix) is the size of the alphabet you use to write numbers. Base-10 uses 10 digits (0–9). Base-2 uses 2 digits (0 and 1). Base-16 uses 16 digits (0–9 and A–F). The mathematical value is identical — the number of apples in a basket does not change when you write it in binary — but the representation changes, and different representations make different operations easier.
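This positional rule can be made concrete with a small sketch; `digitsToValue` is a hypothetical helper, not part of the tool:

```javascript
// Compute the value of a digit string in a given base using
// positional notation: reading left to right, each step shifts
// the accumulated value one position and adds the next digit.
function digitsToValue(digits, base) {
  const alphabet = "0123456789abcdef";
  let value = 0n;
  for (const ch of digits.toLowerCase()) {
    const d = BigInt(alphabet.indexOf(ch));
    value = value * BigInt(base) + d;
  }
  return value;
}

// The same quantity, three representations:
digitsToValue("11111111", 2);  // 255n
digitsToValue("377", 8);       // 255n
digitsToValue("ff", 16);       // 255n
```

The value is identical each time; only the digit string changes with the base.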

This tool converts integers between the four bases developers actually use: binary (2), octal (8), decimal (10), and hexadecimal (16). Type a value in any base, see it in all four simultaneously, with copy-ready output and bit-width information.

When each base matters

Decimal (base-10) — the human default

The base your brain is wired for. Use it when a number is primarily read by humans: user-facing prices, counts, timestamps, percentages. Also the default for math libraries, SQL literals, and most configuration values. If you're not sure which base to use, the answer is decimal.

Binary (base-2) — the machine's native language

Every computer, under every abstraction, runs on binary. Use it when you need to reason about individual bits: bitmasks, feature flags, permission bits, network subnet masks, hardware register values. The classic example is a Unix file permission like chmod 755 — the 7 and 5 are octal but the underlying meaning is three sets of three bits: 111 101 101 (read+write+execute for owner, read+execute for group and other).
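The triplet decoding described above can be sketched in a few lines; `decodeMode` is an illustrative name, not a real API:

```javascript
// Decode a chmod-style octal string into rwx notation.
// Each octal digit is exactly three permission bits:
// read = 4, write = 2, execute = 1.
function decodeMode(octal) {
  return [...octal].map(d => {
    const bits = parseInt(d, 8);
    return (bits & 4 ? "r" : "-") +
           (bits & 2 ? "w" : "-") +
           (bits & 1 ? "x" : "-");
  }).join("");
}

decodeMode("755"); // "rwxr-xr-x"
```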

Binary gets unwieldy fast — 32 bits is already a challenge to read. This is why developers reach for hex most of the time instead.

Hexadecimal (base-16) — compact binary

Hex is binary's shorthand. Every hex digit represents exactly 4 bits (called a nibble). That means you can translate hex to binary by substituting each digit with its 4-bit equivalent, with no arithmetic required: 0xFF = 1111 1111, 0xA3 = 1010 0011. This is why hex dominates the low-level world:

  • Memory addresses: 0x7fff5fbff8a0 is far more readable than the 48-bit binary equivalent
  • Color codes: #FF6B35 = red 255, green 107, blue 53. Each pair is one byte
  • Unicode code points: U+1F600 (grinning face emoji), U+00E9 (é)
  • UUID/GUID: every hex digit represents 4 bits of the 128-bit identifier
  • MAC addresses, SHA hashes, JWT headers — anywhere bytes need to be written compactly
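The digit-for-nibble substitution is mechanical enough to write in one line; `hexToBinary` here is a sketch, not the tool's implementation:

```javascript
// Translate hex to binary digit-by-digit: each hex digit maps
// to a fixed 4-bit pattern, with no arithmetic on the whole number.
function hexToBinary(hex) {
  return [...hex.toLowerCase()]
    .map(d => parseInt(d, 16).toString(2).padStart(4, "0"))
    .join(" ");
}

hexToBinary("FF"); // "1111 1111"
hexToBinary("A3"); // "1010 0011"
```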

Octal (base-8) — mostly for Unix permissions

Octal was popular on early computers with 12-, 24-, or 36-bit words (PDP-8, PDP-10) where 3-bit groups divided evenly. Today it survives almost exclusively in one place: Unix file permissions. The classic chmod 755 script.sh uses octal because each permission triplet (read=4, write=2, execute=1) fits exactly in one octal digit. Almost everywhere else, octal is a historical curiosity. In JavaScript, the leading-zero octal literal (017) is a famous footgun and a SyntaxError in strict mode — use 0o17 instead.
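A quick demonstration of the two octal notations in JavaScript:

```javascript
"use strict";

// Modern octal literals use the 0o prefix; the legacy
// leading-zero form is rejected outright in strict mode.
const mode = 0o17;       // 15 — explicit, unambiguous
// const legacy = 017;   // SyntaxError in strict mode

// parseInt needs an explicit radix to read octal strings:
parseInt("17", 8);       // 15
parseInt("017", 10);     // 17 — the string is just decimal
```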

How to use this tool

  1. Pick the input base (binary, octal, decimal, or hex)
  2. Type a value in that base. Prefixes are optional and auto-stripped: 0b10101010, 0o755, 0xFF. Underscores and whitespace as digit separators are also allowed: 1010_1010
  3. See the result in all four bases at once, with one-click copy for each
  4. Click a preset chip for common values: byte max (255), 1 KB (1024), port max (65535), common file permissions
  5. Toggle digit grouping to display binary in nibbles (0110 1010) and hex in bytes (FF 7A) for readability
  6. Check the bit width / byte width / fits in badges below — at a glance, see whether your value fits in uint8, uint16, uint32, uint64, or needs BigInt
  7. Switch the input base while you have a valid value and the tool converts the input for you — no re-typing
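The input normalization in step 2 can be sketched like this, assuming the tool's described behavior (prefix stripping, underscore and whitespace separators); `parseAnyBase` is a hypothetical name:

```javascript
// Normalize a digit string as the tool describes: strip an
// optional 0b/0o/0x prefix, drop underscore and space
// separators, then parse at the requested base via BigInt.
function parseAnyBase(input, base) {
  const cleaned = input.trim()
    .replace(/[_\s]/g, "")       // separators
    .replace(/^0[box]/i, "");    // optional prefix
  const prefix = { 2: "0b", 8: "0o", 16: "0x" }[base] ?? "";
  return BigInt(prefix + cleaned); // BigInt understands 0b/0o/0x
}

parseAnyBase("1010_1010", 2);  // 170n
parseAnyBase("0o755", 8);      // 493n
parseAnyBase("0xFF", 16);      // 255n
```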

BigInt, not float

JavaScript's Number type is a 64-bit float, which means integer precision breaks down above 2⁵³ (9,007,199,254,740,991). This tool uses BigInt throughout, so numbers up to any size convert cleanly — useful for cryptographic values, 128-bit UUIDs, and arbitrary-precision math. Try pasting a 40-character hex string; it converts to 160-bit decimal and binary without losing a digit.
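The precision cliff is easy to see directly:

```javascript
// Above 2^53, Number silently loses integer precision;
// BigInt does not.
const big = 2 ** 53;   // 9007199254740992
big + 1 === big;       // true — the +1 rounds away

const bigN = 2n ** 53n;
bigN + 1n === bigN;    // false — BigInt is exact

// A 160-bit value round-trips losslessly through BigInt:
const h = BigInt("0x" + "ab".repeat(20));
h.toString(16).length; // 40 — every hex digit preserved
```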

Common bugs and gotchas

  • Leading-zero octal in JavaScript (legacy): the literal 017 means 15 (octal) in sloppy mode but is a SyntaxError in strict mode, and some pre-ES5 engines even treated parseInt("017") as octal. Always pass an explicit radix: parseInt(s, 10). This class of bug has caused real production incidents
  • Hex color confusion: #FFF expands to #FFFFFF (repeating each digit), not #000FFF. The 3-digit form is a CSS shorthand, not a general hex rule
  • Signed vs unsigned interpretation: the hex value 0xFF can mean 255 (unsigned byte) or -1 (signed byte in two's complement). Your language decides which. The 0xCCCCCCCC pattern MSVC writes into uninitialized memory in debug builds is a famous sentinel
  • Endianness: the hex value 0x12345678 is stored in memory as the byte sequence 78 56 34 12 on x86 (little-endian) but 12 34 56 78 on network wire format (big-endian). This tool shows the abstract numeric value; endianness matters when you read raw bytes
  • Leading zeros and bit width: 0b11 and 0b00000011 are the same number (3), but different bit widths. When interfacing with fixed-width protocols, the leading zeros matter — always specify the width explicitly
  • Negative binary: this tool displays negative numbers with a minus sign (−5 → -101), not in two's complement. Two's complement representation depends on the target bit width (8, 16, 32, 64 bit) — use a fixed-width bit manipulator if you need that specific encoding
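The signed-vs-unsigned gotcha from the list above can be reproduced with JavaScript's built-in width reinterpretation helpers:

```javascript
// The same 8-bit pattern, read two ways.
// BigInt.asUintN / asIntN reinterpret a value at a given width.
const byte = 0xFFn;
BigInt.asUintN(8, byte);  // 255n — unsigned byte
BigInt.asIntN(8, byte);   // -1n  — signed byte, two's complement
```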

Bitmask math, the everyday use case

Bitmasks are the most common reason to convert between hex and binary. Say you're debugging a feature-flags value:

flags = 0x2D

// In binary (4 bits per hex digit):
// 0x2D = 0010 1101

// Reading bit-by-bit (right to left):
// bit 0 (value 1)  = 1  → FEATURE_LOGIN  enabled
// bit 1 (value 2)  = 0  → FEATURE_SEARCH disabled
// bit 2 (value 4)  = 1  → FEATURE_EXPORT enabled
// bit 3 (value 8)  = 1  → FEATURE_ADMIN  enabled
// bit 4 (value 16) = 0  → FEATURE_BETA   disabled
// bit 5 (value 32) = 1  → FEATURE_DARK   enabled

Being able to flip between hex and binary in two seconds turns "what does this flag value mean" from a 5-minute calculator exercise into a glance.
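The same bit tests can be written directly in code; the flag names mirror the comments above and are illustrative:

```javascript
// Named masks, one bit each.
const FEATURE_LOGIN  = 1 << 0;
const FEATURE_SEARCH = 1 << 1;
const FEATURE_EXPORT = 1 << 2;

const flags = 0x2D;               // 0b101101

Boolean(flags & FEATURE_LOGIN);   // true  — bit 0 set
Boolean(flags & FEATURE_SEARCH);  // false — bit 1 clear
Boolean(flags & FEATURE_EXPORT);  // true  — bit 2 set
flags.toString(2);                // "101101"
```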

Privacy

Base conversion runs entirely in your browser via native BigInt arithmetic. No number you enter is sent to a server. Safe for sensitive values: license keys, permission masks, memory dumps, cryptographic constants.

Frequently asked questions

Why hex instead of binary for color codes?

A 24-bit RGB color in binary would be 11111111 01101011 00110101 — readable but long. In hex it's #FF6B35 — 6 characters, trivially split into red/green/blue pairs, unambiguously pronounceable ("eff-eff six-bee thirty-five"). Hex is the compact notation that still reveals each byte at a glance.
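Splitting the pairs back into channel values is one slice-and-parse per byte; `hexToRgb` is a sketch, not a library API:

```javascript
// Split a #RRGGBB color into its three byte channels.
function hexToRgb(color) {
  const hex = color.replace(/^#/, "");
  return {
    r: parseInt(hex.slice(0, 2), 16),
    g: parseInt(hex.slice(2, 4), 16),
    b: parseInt(hex.slice(4, 6), 16),
  };
}

hexToRgb("#FF6B35"); // { r: 255, g: 107, b: 53 }
```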

What's the biggest number this tool handles?

No practical limit. The tool uses JavaScript's BigInt, which supports integers of arbitrary size (limited only by available memory). A 1000-digit decimal integer converts to a 3322-digit binary in a few milliseconds.
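The 1000-digit claim is easy to check yourself — 1000 × log₂(10) ≈ 3322:

```javascript
// A 1000-digit decimal integer converts exactly with BigInt;
// its binary representation is 3322 digits long.
const n = BigInt("9".repeat(1000));
n.toString(2).length; // 3322
```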

Does this tool handle fractional numbers (like 3.14)?

No — integers only. Fractional base conversion is a different problem with different edge cases (repeating binary expansions, floating-point precision). Almost every developer use case for base conversion involves integers, so this tool stays focused.

What's the difference between "0x" and "#"?

Both prefixes mark hex values, but they come from different conventions. 0x is the C language convention used in C/C++/Java/Python/JavaScript and most programming contexts. # is the CSS/HTML convention for color codes. They mean the same thing; the prefixes simply descend from different lineages (0x from C, # from older assembler and markup conventions). Use whichever matches your target context.

Why are some hex digits letters?

Base-16 needs 16 different symbols. We already have 10 digits (0–9) but need 6 more. The convention is to use A–F for the values 10–15. A=10, B=11, C=12, D=13, E=14, F=15. Case is not significant (FF = ff) but convention varies by context: C programmers lean lowercase, assembly listings often uppercase, CSS color codes are case-insensitive.

How do I convert negative numbers in two's complement?

This tool displays negative numbers with a minus sign ("-5" → -101), which is the mathematically correct representation. For two's complement (how CPUs actually store negative integers), you need to specify a bit width. Example: -5 in 8-bit two's complement is 11111011 (= 251 when read as unsigned). The encoding depends on width, which is out of scope here. A dedicated bit-manipulation tool with explicit width selection is a good fit for that task.
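If you need the encoding right now, JavaScript's BigInt width helpers produce it:

```javascript
// Encode -5 as an 8-bit two's-complement pattern:
const encoded = BigInt.asUintN(8, -5n); // 251n — the unsigned reading
encoded.toString(2);                    // "11111011"

// Decode back: reinterpret the same bits as signed.
BigInt.asIntN(8, encoded);              // -5n
```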

Is my data sent anywhere?

No. All conversion runs via JavaScript's native BigInt in your browser. No network calls, no analytics on the values you convert, no logging.