Can you have a class inheritance tree with more than two generations? More than a base class and a derived class
Inheritance tree
Yes.
Are you looking for some in-depth explanation of an aspect of the topic, or did you just want somebody to tell you that before you try it yourself…?
Neither. Just a shallow and brief explanation, if that's not asking too much.
My project's facebook page is "DreamLand Page"
With respect…
A question like this is easily answered by today's AI Language Models.
Especially 'shallow and brief'. Here is a cut and paste from Google.
In C++, there's no strict limit on the depth of inheritance, but there are practical considerations:
Factors Influencing the Depth:
- Memory Overhead: Each level of inheritance adds some memory overhead due to the inclusion of base class members.
- Code Complexity: Deep inheritance hierarchies can lead to complex code that's hard to understand and maintain.
- Performance: Virtual function calls, commonly used in inheritance, can have a slight performance impact compared to direct function calls.
Best Practices:
- Favor Composition over Inheritance: If possible, consider using composition to achieve code reuse instead of deep inheritance hierarchies.
- Keep Inheritance Hierarchies Shallow: Aim for a few levels of inheritance to keep your code manageable.
- Use Virtual Inheritance Carefully: Use virtual inheritance to avoid the diamond problem, but be mindful of the performance implications.
Dev careful. Pixel on board.
Also subtly wrong in several ways. It MAY have some memory overhead, or it may not: only if the base class actually requires it. The performance impact likely isn't what you're expecting either, as the cost of virtual dispatch is largely hidden by modern cores. And inheritance done well often reduces complexity relative to the non-inheritance version. Generally, attempting your own approach has a higher performance cost than the compiler's well-optimized virtual dispatch, and the alternatives to inheritance tend to require more complex solutions than inheritance itself.
Like so many other features, using what the compiler gives you has a cost but trying to implement it yourself generally costs more.
The AI version is similar to yet distinctly different from the truth.
I think I just got frobbed.
Okay let's measure it.
#include <iostream>
#include <chrono>
#include <vector>

class Base {
public:
    virtual void doSomething() {
        // Do some basic operation
        int sum = 0;
        for (int i = 0; i < 1000; ++i) {
            sum += i;
        }
    }
};

class Derived : public Base {
public:
    void doSomething() override {
        // Do the same basic operation
        int sum = 0;
        for (int i = 0; i < 1000; ++i) {
            sum += i;
        }
    }
};

class Derived_L2 : public Derived {
public:
    void doSomething() override {
        int sum = 0;
        for (int i = 0; i < 1000; ++i) {
            sum += i;
        }
    }
};

int main() {
    const int numIterations = 1000000;
    std::vector<Base*> baseVec(numIterations);
    std::vector<Derived*> derivedVec(numIterations);
    std::vector<Derived_L2*> derived_L2Vec(numIterations);
    for (int i = 0; i < numIterations; ++i) {
        baseVec[i] = new Base();
        derivedVec[i] = new Derived();
        derived_L2Vec[i] = new Derived_L2();
    }

    // Test Base speed
    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < numIterations; ++i) {
        baseVec[i]->doSomething();
    }
    auto end = std::chrono::high_resolution_clock::now();
    std::cout << "Base time: "
              << std::chrono::duration_cast<std::chrono::microseconds>(end - start).count()
              << " microseconds" << std::endl;

    // Test Derived speed
    start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < numIterations; ++i) {
        derivedVec[i]->doSomething();
    }
    end = std::chrono::high_resolution_clock::now();
    std::cout << "Derived time: "
              << std::chrono::duration_cast<std::chrono::microseconds>(end - start).count()
              << " microseconds" << std::endl;

    // Test Derived_L2 speed
    start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < numIterations; ++i) {
        derived_L2Vec[i]->doSomething();
    }
    end = std::chrono::high_resolution_clock::now();
    std::cout << "Derived_L2 time: "
              << std::chrono::duration_cast<std::chrono::microseconds>(end - start).count()
              << " microseconds" << std::endl;

    // Cleanup
    for (int i = 0; i < numIterations; ++i) {
        delete baseVec[i];
        delete derivedVec[i];
        delete derived_L2Vec[i];
    }
}
// Result:
// Base time:       8056 microseconds, second run 6367 microseconds
// Derived time:    9130 microseconds, second run 7479 microseconds
// Derived_L2 time: 7700 microseconds, second run 5934 microseconds
Damn, that's actually kinda neat. 🙂
Would be interesting to know why the first derived suffers consistently.
The times are large enough to be hit by OS thread scheduling. At the smaller scale, the micro-benchmark measures the core's prediction more than the loop itself: everything fits in cache, including the virtual dispatch tables. And since the body is simple addition where each iteration depends on the previous result, the out-of-order core spends most of its time on retirement of committed uOps, waiting for notification that a register is updated and ready for the next iteration.
You are not benchmarking what you think you are.
Inheritance is a great solution to the problem, and the compilers and CPUs are well-optimized to handle it. You can have quite complex inheritance trees in C++, including multiple inheritance and virtual inheritance. Many languages allow inheriting multiple interfaces (pure abstract classes) in addition to a concrete base class.