How to effectively use git repositories / submodules for a C++ product that has many dependencies?

I'm very new to Git and still figuring things out... I think I'm finally understanding the whole branching/merging aspects. But I'm still not sure what the best solution for handling project dependencies is. What is best practice? This has got to be a common problem, and yet I can't find a good tutorial or best practice on doing this.

Suppose I have a C++ product that depends on several other C++ libraries, ultimately making up a complicated dependency graph. The libraries include:

  • Other internally developed C++ libraries
  • Public open source libraries
  • Off-the-shelf closed source libraries

The final C++ product's source code relies on the output of its dependencies in order to compile. These outputs are composed of:

  • A series of C++ header files (notice that the C++ implementation files are absent)
  • A set of compiled binaries (LIB files, DLL files, EXE files, etc.)

My understanding is that I should put each library in its own repository. Then it sounds like Git's submodules are mostly what we are looking for. The write-up at http://chrisjean.com/2009/04/20/git-submodules-adding-using-removing-and-updating/ in particular seems like a good introduction, and I can almost understand it. For example, I could have my master project repository refer to a specific external Git repository as a submodule / dependency. C++ code can "#include" header files from the appropriate submodule directories. A build script included with the master product / repository could conceivably proceed to recursively compile all submodules.
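For concreteness, a minimal sketch of that setup might look like the following (the repository URLs and the extern/ layout are made-up examples, not anything prescribed here):

    # Add an internal library as a submodule under extern/ (URL and path are hypothetical)
    git submodule add https://example.com/git/coollib.git extern/coollib
    git commit -m "Add coollib as a submodule"

    # A fresh checkout of the product can pull every pinned submodule in one step
    git clone --recurse-submodules https://example.com/git/product.git

    # In an existing clone, bring the submodules to the commits the superproject pins
    git submodule update --init --recursive

The superproject only records which commit each submodule is pinned to, so bumping a dependency is just a normal commit in the superproject, and the product's C++ code can #include headers from extern/coollib/include/ (or wherever that submodule keeps them).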

OK now the question:

How do you typically cache binaries for each repository? Some of our dependencies take hours to compile and aren't updated very frequently. With the above scheme, I might clone / check out a high-level project from the server to fix a small bug. As I understand it, I'm then also forced to clone all the thousands of files that make up each of these open source dependencies - I'm worried that could take some time (especially on Windows). Even worse, wouldn't I then be forced to recompile each and every submodule, even if nobody has changed it for months? (It seems like some kind of local "hash table" scheme on each developer's computer, linking a changeset ID to a set of compiled binaries, would be handy...)
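One way to approximate that local "hash table" is to key a binary cache on each submodule's commit hash. A rough sketch, in which the cache location, submodule path, and build.sh entry point are all hypothetical, could be:

    #!/bin/sh
    # Per-submodule binary cache keyed by the submodule's commit hash.
    # All paths and the build command are placeholders for illustration.
    SUBMODULE=extern/coollib
    CACHE_DIR="$HOME/.build-cache"
    HASH=$(git -C "$SUBMODULE" rev-parse HEAD)

    mkdir -p "$CACHE_DIR" "$SUBMODULE/build"
    if [ -f "$CACHE_DIR/$HASH.tar.gz" ]; then
        # This exact revision was built before: unpack the cached binaries.
        tar -xzf "$CACHE_DIR/$HASH.tar.gz" -C "$SUBMODULE/build"
    else
        # First time at this revision: do the full build, then populate the cache.
        (cd "$SUBMODULE" && ./build.sh)
        tar -czf "$CACHE_DIR/$HASH.tar.gz" -C "$SUBMODULE/build" .
    fi

On the cloning cost itself, shallow clones (git clone --depth 1 --recurse-submodules --shallow-submodules, with a reasonably recent Git) also cut down how much history has to be transferred for the large open source dependencies.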

(A previous shop I worked at a few years ago used Mercurial, but all code, internal projects and so on, was rolled into one single giant repository, and you had to build everything with a big fat monolithic build script after cloning a newly created branch from the server. When we were done with the fix / new feature and had merged back with upstream, we deleted the local repository for that particular branch.)

We're doing development on Windows, but will eventually branch out to other non-Microsoft platforms - so portability is important.


Normally this is a bad idea, but for the submodules that don't change often, why don't you check the compiled binaries into those submodule repositories alongside the source? That way the fetch will pull down the binaries too, and when you compile a new version of a dependency with changed binaries, you will see the binaries show up in the git status output.
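As a sketch of what that looks like in practice (the directory layout and library name below are only an illustration), the slow-moving submodule would simply track its prebuilt outputs next to its headers:

    # Inside a rarely-changing submodule: commit the prebuilt outputs next to the headers.
    # Library name, build output paths, and the prebuilt/ layout are made up.
    cd extern/boringlib
    mkdir -p prebuilt/win64
    cp build/Release/boringlib.lib build/Release/boringlib.dll prebuilt/win64/
    git add include prebuilt
    git commit -m "Check in prebuilt win64 binaries for this revision"

The superproject then pins the submodule at that commit, so a plain git submodule update --init brings down headers and binaries together. If repository size becomes a concern, the large binaries can be tracked with Git LFS instead of being committed directly.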
