Why are interpreted languages mostly duck-typed while compiled languages have strong typing?
The premises behind the question are a bit dodgy. It is not true that interpreted languages are mostly duck-typed. It is not true that compiled languages mostly have strong typing. The type system is a property of a language. Compiled versus interpreted is a property of an implementation.
The programming language Scheme is dynamically typed (aka duck-typed), and it has many dozens of interpreted implementations, but also some fine native-code compilers, including Larceny, Gambit, and PLT Scheme (which includes both an interpreter and a JIT compiler making seamless transitions).
The programming language Haskell is statically typed; the two most famous implementations are the interpreter HUGS and the compiler GHC. There are several other honorable implementations split about evenly between compiling to native code (yhc) and interpretation (Helium).
The programming language Standard ML is statically typed, and it has had many native-code compilers, of which one of the best and most actively maintained is MLton, but one of the most useful implementations is the interpreter Moscow ML.
The programming language Objective Caml is statically typed. It comes with only one implementation (from INRIA in France), but this implementation includes both an interpreter and a native-code compiler.
The programming language Pascal is statically typed, but it became popular in the 1970s because of the excellent implementation built at UCSD, which was based on a P-code interpreter. In later years fine native-code compilers became available, such as the IBM Pascal/VS compiler for the 370 series of computers.
The programming language C is statically typed, and today almost all implementations are compiled, but in the 1980s those of us lucky enough to be using Saber C were using an interpreter.
Nevertheless, there is some truth behind your question, so you deserve a more thoughtful answer. The truth is that dynamically typed languages do seem to be correlated with interpreted implementations. Why might that be?
Many new languages are defined by an implementation. It is easier to build an interpreter than to build a compiler. It is easier to check types dynamically than to check them statically. And if you are writing an interpreter, there is little performance benefit to static type-checking.
Unless you are creating or adapting a very flexible polymorphic type system, a static type system is likely to get in the programmer's way. But if you are writing an interpreter, one reason may be to create a small, lightweight implementation that stays out of the programmer's way.
In some interpreted languages, many fundamental operations are so expensive that the additional overhead of checking types at run time doesn't matter. A good example is PostScript: if you're going to run off and rasterize Bezier curves at the drop of a hat, you won't balk at checking a type tag here or there.
Incidentally, please be wary of the terms "strong" and "weak" typing, because they don't have a universally agreed technical meaning. By contrast, static typing means that programs are checked before being executed, and a program might be rejected before it starts. Dynamic typing means that the types of values are checked during execution, and a poorly typed operation might cause the program to halt or otherwise signal an error at run time. A primary reason for static typing is to rule out programs that might have such "dynamic type errors". (This is another reason people who write interpreters are often less interested in static typing; execution happens immediately after type checking, so the distinction and the nature of the guarantee aren't as obvious.)
Strong typing generally means that there are no loopholes in the type system, whereas weak typing means the type system can be subverted (invalidating any guarantees). The terms are often used incorrectly to mean static and dynamic typing. To see the difference, think of C: the language is type-checked at compile time (static typing), but there are plenty of loopholes; you can pretty much cast a value of any type to another type of the same size, and in particular, you can cast pointer types freely. Pascal was a language that was intended to be strongly typed but famously had an unforeseen loophole: a variant record with no tag.
Implementations of strongly typed languages often acquire loopholes over time, usually so that part of the run-time system can be implemented in the high-level language. For example, Objective Caml has a function called Obj.magic, which has the run-time effect of simply returning its argument, but at compile time it converts a value of any type to a value of any other type. My favorite example is Modula-3, whose designers called their type-casting construct LOOPHOLE.
Static vs dynamic is the language.
Compiled vs interpreted is the implementation.
In principle the two choices can be and are made orthogonally, but for sound technical reasons dynamic typing frequently correlates with interpretation.
The reason that you do early binding (strong typing) is performance. With early binding, you find the location of the method at compile time, so that at run time the program already knows where it lives.
However, with late binding, you have to go searching for a method that seems like the method that the client code called. And of course, with many, many method calls in a program, that's what makes dynamic languages 'slow'.
But sure, you could create a statically compiled language that does late binding, which would negate many of the advantages of static compilation.
Because compiled languages need to take the amount of memory used into account when they are compiled.
When you see something like:
```
var a = 10;        // a is probably a four byte int here
a = "hello world"; // now a is a 12 byte char array
```
There is a lot that happens between those two lines. The interpreter deletes the memory at a, allocates a new buffer for the chars, then assigns the a var to point to that new memory. In a strongly typed language, there is no interpreter that manages that for you, and thus the compiler must write instructions that take type into account.
```
int a = 10;        // we now have four bytes on the stack
a = "hello world"; // wtf? we can't push 12 bytes into a four byte variable! Throw an error!
```
So the compiler stops that code from compiling, so the CPU doesn't blindly write 12 bytes into a four-byte buffer and cause misery.
The added overhead of a compiler writing extra instructions to take care of type would slow down the language significantly and remove the benefit of languages like C++.
EDIT in response to comment
I don't know much about Python, so I can't say much about that. But loose typing slows down run time considerably. Each instruction that the interpreter (VM) executes has to be evaluated and, if necessary, the var coerced into the expected type. If you have:
```
mov a, 10
mov b, "34"
div a, b
```
Then the interpreter has to make sure that a is a variable and a number, then it has to coerce b into a number before processing the instruction. Add that overhead for every instruction that the VM executes and you have a mess on your hands :)
It's pretty much because people who write and use interpreted languages tend to prefer duck typing, and people who develop and use compiled languages prefer strong explicit typing. (I think the consensus reason for this would be somewhere in the area of 90% error prevention and 10% performance.) For most programs written today, the speed difference would be insignificant. Microsoft Word has run on p-code (uncompiled) for - what - 15 years now?
The best case in point I can think of: classical Visual Basic (VB6/VBA/etc.). The same program could be written in VB and run with identical results and comparable speed either compiled or interpreted. Furthermore, you have the option of type declarations (in fact, variable declarations) or not. Most people preferred type declarations, usually for error prevention. I've never heard or read anywhere to use type declarations for speed. And this goes back at least as far as two orders of magnitude in hardware speed and capacity.
There are basically two reasons to use static typing over duck typing:
- Static error checking.
- Performance.
If you have an interpreted language, then there's no compile time for static error checking to take place. There goes one advantage. Furthermore, if you already have the overhead of the interpreter, then the language is already not going to be used for anything performance critical, so the performance argument becomes irrelevant. This explains why statically typed interpreted languages are rare.
Going the other way, duck typing can be emulated to a large degree in statically typed languages, without totally giving up the benefits of static typing. This can be done via any of the following:
- Templates. In this case, if the type you instantiate your template with supports all the methods called from within the template, your code compiles and works. Otherwise it gives a compile time error. This is sort of like compile-time duck typing.
- Reflection. You try to invoke a method by name, and it either works or throws an exception.
- Tagged unions. These are basically container classes for other types that contain some memory space and a field describing the type currently contained. These are used for things like algebraic types. When a method is invoked, it either works or throws, depending on whether the type currently contained supports it.
This explains why there are few dynamically typed, compiled languages.
I'm guessing that languages with dynamic (duck) typing employ lazy evaluation, which is favored by lazy programmers, and lazy programmers don't like to write compilers ;-)
Languages with weak typing can be compiled; for example, Perl 5 and most versions of Lisp are compiled languages. However, the performance benefits of compiling are often lost because much of the work that the language runtime has to perform is to do with determining what type a dynamic variable really has at a particular time.
Take for example the following code in Perl:
```
$x = 1;
$x = "hello";
print $x;
```
It is obviously pretty difficult for the compiler to determine what type $x really has at a given point in time. At the time of the print statement, work needs to be done to figure that out. In a statically typed language, the type is fully known, so performance at runtime can be increased.
In a compiled language, one system (the compiler) gets to see all the code required to do strong typing. Interpreters generally only see a tiny bit of the program at a time, and so can't do that sort of cross-checking.
But this isn't a hard and fast rule - it would be quite possible to make a strongly typed interpreted language, but that would go against the sort of "loose" general feel of interpreted languages.
Some languages are meant to run perfectly in non-exceptional conditions, at the cost of horrible performance when they hit exceptional conditions; hence very strong typing. Others were just meant to balance it with additional run-time processing.
At times, there's way more in play than just typing. Take ActionScript, for instance. 3.0 introduced stronger typing, but then again ECMAScript enables you to modify classes as you see fit at runtime, and ActionScript has support for dynamic classes. Very neat, but the fact that they state that dynamic classes should not be used in "standard" builds means it's a no-no for when you need to play it safe.