- To multiply decimal numbers, multiply them as if there were no decimal points, and then put as many decimal digits in the answer as there are total in the factors.

But do your students *understand* why we have such a rule? Where does it come from?

Understanding the rule for decimal multiplication is actually fairly simple, because it comes from fraction multiplication. But I will propose here a slightly different way of explaining it.

First, look over this decimal multiplication lesson that is taken from Math Mammoth Decimals 2 book.

It talks about how 0.4 × 45 is like taking 4/10 part of 45. The same applies if you have 0.4 × 0.9 - you can think of it as taking 4/10 part of 0.9.

Can you see now why the answer to 0.4 × 0.9 has to be *smaller* than 0.9?

Or, turn it around: 0.9 × 0.4 is taking 9/10 of 0.4, and so the answer has to be slightly *smaller than* 0.4.

Thinking this way, it shouldn't be a big surprise that 0.9 × 0.4 equals 0.36. (The student needs to have a solid grasp of decimal place value prior to this so he can immediately see that 0.36 is smaller than 0.4.)
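To make this concrete, here is a small sketch using Python's standard `fractions` module (the code is my illustration, not part of the lesson) that computes 0.4 × 0.9 exactly as "taking 4/10 of 9/10":

```python
from fractions import Fraction

# 0.4 × 0.9 understood as "taking 4/10 of 0.9", done with exact fractions
answer = Fraction(4, 10) * Fraction(9, 10)

print(answer == Fraction(36, 100))  # True: 4/10 of 9/10 is 36/100
print(float(answer))                # 0.36, which is indeed smaller than 0.4
```

Because `Fraction` keeps the arithmetic exact, there is no floating-point rounding to muddy the picture; the answer really is 36/100, i.e. 0.36.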

Now, once your student is comfortable with this idea (as explained in the lesson), then you can proceed on with the explanation based on fraction multiplication. See, we're taking it one step at a time!

**Comparing fraction multiplication and decimal multiplication**

(I have not yet written a lesson about this for my books, but will do so for the Light Blue 5-B.)

Remember, decimals are fractions.

Let's take an easy example first.

0.5 × 0.7 is solved with fractions like this:

(5/10) × (7/10) = 35/100 = 0.35

Notice the denominators 10 and 10 got multiplied to produce the denominator 100 for the answer, so the answer written as a decimal has two decimal digits.

Another example:

0.384 × 2.91

= (384/1000) × (291/100)

= (384 × 291) / (1000 × 100)

= 111744 / 100000

= 1.11744
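If you'd like to check a computation like this, the same `fractions` module can replay it step by step (again, just an illustrative sketch):

```python
from fractions import Fraction

# 0.384 × 2.91 done exactly as fractions: (384/1000) × (291/100)
product = Fraction(384, 1000) * Fraction(291, 100)

print(product == Fraction(111744, 100000))  # True: numerators and denominators multiplied
print(float(product))                       # 1.11744
```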

The denominators 1000 and 100 have as many zeros as there are decimal digits in each factor. The denominator of the answer is 100,000, which has five zeros, so the answer as a decimal has five decimal digits.

One more time:

0.45 × 1.3

= (45/100) × (13/10)

= (45 × 13) / (100 × 10)

= 585 / 1000

= 0.585

So... when you write decimals as fractions, the denominators are **powers of ten that have as many zeros as there are decimal digits in the decimal number**. When you multiply, those denominators get multiplied, and you get another power of ten with as many zeros as the factors' denominators had in total. That, in turn, translates into a decimal number with as many decimal digits as there were decimal digits in the factors combined.
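The whole shortcut can be sketched as code. This is only an illustration of the textbook rule; the helper name `multiply_decimals` is mine, and it handles just non-negative numbers written as plain digit strings:

```python
def multiply_decimals(a: str, b: str) -> str:
    """Multiply two decimal numbers the textbook way:
    drop the decimal points, multiply as whole numbers, then place
    as many decimal digits in the product as the factors had in total."""
    digits_a = len(a.split(".")[1]) if "." in a else 0
    digits_b = len(b.split(".")[1]) if "." in b else 0
    product = int(a.replace(".", "")) * int(b.replace(".", ""))
    total = digits_a + digits_b
    if total == 0:
        return str(product)
    s = str(product).rjust(total + 1, "0")  # pad so a digit sits before the point
    return s[:-total] + "." + s[-total:]

print(multiply_decimals("0.9", "0.4"))     # 0.36
print(multiply_decimals("0.384", "2.91"))  # 1.11744
```

Counting the decimal digits here is exactly the same thing as counting the zeros in the fraction denominators above.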

(In case you don't know: powers of ten are the numbers 10¹, 10², 10³, 10⁴, 10⁵, and so on. Written without the exponential notation these are 10; 100; 1000; 10,000; 100,000; and so on.)
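A tiny loop (again just an illustration) prints these powers of ten in both notations:

```python
# The first five powers of ten, with and without exponent notation
for n in range(1, 6):
    print(f"10^{n} = {10**n:,}")
```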