Wintellect Blogs

Do PDB Files Affect Performance?

8 Mar, 2014

After a detour into Historical Debugging, it’s time to return to answering questions about PDB files. Here’s a question from Justin:

Thanks for the great post once again. I was looking forward to your debugging virtual training, but unfortunately it was cancelled.

The company I work for is pushing back against building release mode binaries with debug information generated, one of the reasons I signed up for the class :). They are afraid performance will be affected.

My question is: what are the best command line arguments for generating symbols in release mode? Also, is there somewhere I can reference to show that there should be no performance hit?

I’m sorry about the canceled class, but the good news is that Mastering .NET Debugging was rescheduled to July 14-15.

The executive summary answer: no, generating PDB files will have no impact on performance whatsoever. As for references I can point you to, I haven’t found any on the web that answer this exact question, so let me take .NET and native development in turn.

Recently, the always-readable Eric Lippert wrote a great post, What Does the Optimize Switch Do?, where he discusses the optimizations done by the compiler and the Just in Time (JIT) compiler. (Basically, you can sum it up as: the JITter does all the real optimization work.) There’s a bit of confusion around the C# and VB.NET compiler switches because there are four different /debug switches: /debug, /debug+, /debug:full, and /debug:pdbonly. I contributed to that confusion because I thought /debug:pdbonly did something different that was better for release builds than the other three /debug switches.

All four switches do the same thing: they cause a PDB file to be generated. So why are there four switches for the same thing? Do Microsoft developers really love parsing slightly different command line options? The real reason is history. Back in .NET 1.0 there were differences, but as of .NET 2.0 there aren’t, and .NET 4.0 looks to follow the same pattern. After double-checking with the CLR Debugging Team, I can say there is no difference at all.

What controls whether the JITter does a debug build is the /optimize switch. Building with /optimize- adds a DebuggableAttribute to the assembly with its DebuggingModes parameter set to DisableOptimizations. It doesn’t take a Rhodes Scholar to figure out that DisableOptimizations does exactly what it says.

The bottom line is that you want to build your release builds with /optimize+ and any of the /debug switches so you can debug with source code. Read the Visual Studio documentation to see where to set those switches in the different project types.
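As a concrete sketch (assuming the C# command-line compiler CSC.EXE and a source file Program.cs; the output name is illustrative), a release build with full debugging information looks like this:

```shell
rem Release build: full JIT optimizations plus a PDB for later debugging.
rem /optimize+ controls optimization; the /debug switch only controls PDB output.
csc /optimize+ /debug:pdbonly /out:MyApp.exe Program.cs
```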

It’s easy to prove these are the optimal switches. Taking my Paraffin program, I compiled one build with /optimize+ and /debug (which is the same as /debug+ and /debug:full), and the other with /optimize+ and /debug:pdbonly, to show any differences; those differences are the root of how we got the switches wrong in the first place. After compiling, I used ILDASM with the following command line to get the raw information from the binaries:

ILDASM /out=Paraffin.IL Paraffin.exe

Using a diff tool, you’ll see that the IL itself is identical between the two builds. The main difference is in the DebuggableAttribute declaration for the assembly. When built with /optimize+ and a /debug switch, DebuggingModes.IgnoreSequencePoints is passed to the DebuggableAttribute to tell the JIT compiler that it doesn’t need to load the PDB file in order to correctly JIT the IL. A value of DebuggingModes.Default is also OR’d in, but that value is ignored.
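The whole experiment can be sketched in a few commands (a sketch assuming CSC.EXE, ILDASM, and the FC comparison tool are on the PATH; file names are illustrative):

```shell
rem Build the same source twice with different /debug switches.
csc /optimize+ /debug:full    /out:ParaffinFull.exe    Program.cs
csc /optimize+ /debug:pdbonly /out:ParaffinPdbOnly.exe Program.cs

rem Dump the raw IL and metadata from each binary.
ildasm /out=ParaffinFull.il ParaffinFull.exe
ildasm /out=ParaffinPdbOnly.il ParaffinPdbOnly.exe

rem The IL is identical; only the DebuggableAttribute arguments differ.
fc ParaffinFull.il ParaffinPdbOnly.il
```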

As with .NET, building PDB files for native code has nothing to do with optimizations, so generating them has zero impact on the performance of an application. If you have a manager who, in Justin’s words, is “afraid performance will be affected,” here’s what I tell them. (Sadly, I’ve run into more managers who say that than I care to count.)

PDB files might affect performance on other operating systems, but not on Windows. If you think they do, then why does Microsoft build every single product it ships with PDB files turned on for both debug and release builds? They wrote the compiler, they wrote the linker, and they wrote the operating system, so they know exactly what the effects are. Microsoft has more people focused on performance than any other software company in the world. If there were any performance impact at all, they wouldn’t do it. Period. Performance isn’t just one thing at Microsoft; it’s everything.

Where .NET is pretty simple, as there are really only two switches that matter, in native C++ the appropriate optimization switches depend on many application-specific factors. What I can tell you is which switches you need to set to generate PDB files correctly in release builds.

For CL.EXE, the compiler, you need to add /Zi to have it put debugging symbols into the .OBJ file. For LINK.EXE, the linker, you need to specify three options. The first is /DEBUG, which tells the linker to generate a PDB file. However, that switch also tells the linker that this is a debug build, and that alone would affect your binary: with /DEBUG the linker links faster because it no longer looks for individual references, so if you use one function from an OBJ, the linker throws the whole OBJ into the output binary and you end up with a bunch of dead functions.

To tell the linker you want only the referenced functions, you need to add /OPT:REF as the second switch. The third switch is /OPT:ICF, which enables COMDAT folding. There’s a term you don’t hear every day. Basically, it means that when generating the binary, the linker looks for functions that have identical code, generates only one copy, and makes multiple symbols point to that one function.
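Putting the compiler switch and the three linker switches together, a native release build sketch looks like this (file names are illustrative):

```shell
rem Compile with full debug information recorded in the .OBJ file.
cl /Zi /O2 /c main.cpp

rem Link: generate a PDB, but restore the reference elimination and
rem COMDAT folding that /DEBUG alone would turn off.
link /DEBUG /OPT:REF /OPT:ICF /OUT:main.exe main.obj
```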

If you want to test for yourself what effect generating PDB files has on a native binary, it’s nearly as easy as with a .NET binary. Visual Studio comes with a nice little program, DUMPBIN, which can tell you more than you ever wanted to know about a Portable Executable file. Run it with the /DISASM switch to get the disassembly of a binary.
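For example, to compare a build linked with a PDB against one without (a sketch; the tools are Windows-only and the file names are illustrative):

```shell
rem Disassemble a build linked with /DEBUG and one linked without it.
dumpbin /DISASM with-pdb.exe > with-pdb.asm
dumpbin /DISASM no-pdb.exe   > no-pdb.asm

rem The machine code should match; only debug-directory data differs.
fc with-pdb.asm no-pdb.asm
```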

Please keep those PDB related questions coming. Of course, if you have any other questions, I’ll be happy to take a crack at those also. Gee, I better draw the line: no investment or relationship questions. <grin>

  • Anonymous

Pretty sure that gcc is the same as CL in this respect – use of -g is orthogonal to use of -O{whatever}. Certainly we’ve been relying on it for years to get high-fidelity static symbol information from STABS records for monitoring embedded systems (where optimisations are a must – our hardware isn’t quick!). And a quick look at code compiled with/without -g indicates that’s true (at least for a small sample!) – after this

    gcc -c -O2 -o a-no-debug.o a.cpp
    gcc -c -g -O2 -o a-debug.o a.cpp
    strip a-no-debug.o
    strip a-debug.o

    ‘diff -s --binary a-no-debug.o a-debug.o’ says the files are identical.

  • To take this up a level, people need to think through their requirements. What’s more important? Systems with fewer bugs that can be effectively supported in production, or systems that run slightly faster?

  • Anonymous

    Great post, John! One thing that I occasionally use is the .ini file described at:
    to dynamically control whether the jitter will optimize or not.

  • jrobbins


    Thanks. I appreciate you showing the same applies to GCC. Debugging symbols are always good!


    Absolutely! It’s so sad how many times I’ve run into this complaint about creating PDB files. It’s almost like the managers *want* to fail.


    Good point. The INI trick tells the JIT compiler to always turn off all optimizations. I should do a blog entry on that file and the COMPLUS_ZapDisable trick.

    -John Robbins
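    For reference, the .ini trick John mentions uses a small file named after the assembly (e.g., MyApp.ini placed next to MyApp.exe; the name here is illustrative) that overrides the JIT settings implied by the assembly’s DebuggableAttribute:

    ```ini
    ; Force the JIT compiler to generate tracking info and skip
    ; optimizations, even for a release-built assembly.
    [.NET Framework Debugging Control]
    GenerateTrackingInfo=1
    AllowOptimize=0
    ```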

  • Anonymous

    There actually was a performance penalty in v1 of .NET when compiling with debug symbols.
    The attribute you mentioned, DebuggableAttribute(bool isJITTrackingEnabled, bool isJITOptimizerDisabled),
    had its first flag set to true when generating debug symbols.
    That attribute caused extra tracking info to be generated and more memory to be allocated.
    This is no longer the case with v2 and higher.

  • jrobbins


    You’re exactly right. I thought that was still the case in .NET 2.0 and it’s not. I didn’t want to get too much into the differences between .NET 1.0 and 2.0 as I doubt anyone is still doing .NET 1.0 any more.

    -John Robbins

  • Anonymous


    Thanks for the post. That’s exactly the kind of information I’ve been looking for.

    I do have a question as well. We have a native project for which VS invokes the Microsoft Build utility to do the compilation for our release builds. According to the Microsoft website, “If you build your binaries with the Build utility, the utility will create full symbol files.” Using your articles on source servers and symbol servers, I source indexed the PDBs that were generated and placed them in a symbol server. To test it all, I intentionally inserted some code that caused a bug check and a KERNEL memory dump. When I open the crash dump in windbg, it finds the PDBs on the symbol server and retrieves the source from our version control system. All good so far. However, when I issue the dv command, I get an error message:

    “Unable to enumerate locals. HRESULT 0x80004005
    Private symbols (symbols.pri) are required for locals.”

    If the Build utility generated full symbol files, why can’t the debugger show me the local variable values? Are “full” symbol files not really full?

    I’ve searched high & low, and posted to MSDN forums, but nobody seems to know the answer. Since Google results on similar topics usually have your name associated with them, I thought I would ask you.

    Thanks again for your postings and articles on these topics.

  • jrobbins


    1. If you have a symbol server set up and are downloading the operating system symbols from Microsoft, none of the symbols they post have private symbols in them.
    2. I’m going to go out on a limb and guess that you’re doing a driver and are using BUILD.EXE (and not MSBUILD.EXE). It’s been a while since I have done driver work, but you need to look at the build output and see what flags are getting passed to CL.EXE (the compiler) and LINK.EXE (the linker). You need /Zi for CL.EXE and /DEBUG, /OPT:REF, and /OPT:ICF for LINK.EXE. Make sure they are not doing a /PDBSTRIPPED for LINK.EXE, as that creates symbols like the ones Microsoft gives out on the public symbol server.
    3. Make sure you are using .frame to move into your code on the stack. If you are at the top of the stack, you are probably inside Microsoft code, hence the message. Once you’ve done the .frame to your code, the dv command should work like you expect.

    Hope it helps!
    -John Robbins

  • Anonymous

    Good post. I have a question.

    We have a symbol server set up for both Silverlight and WPF apps. For the release builds of both we use /debug:pdbonly /optimize+

    From the docs [] it states:

    “Specifying pdbonly allows source code debugging when the program is started in the debugger but will only display assembler when the running program is attached to the debugger.”

    That matches my results. In our desktop solution, when I debug from Visual Studio (F5) I get symbols and source. However, when I do the same thing in our Silverlight project, with an ASP.NET startup project, I don’t get symbols (I do with the debug build). I think this is because under the covers ASP.NET is attaching to the web server, which matches what the docs state.

    I have read [] about the .ini file which is supposed to allow you to debug when attaching to a program.

    How can I debug a Silverlight application built in release? Do I need to put the .ini files in the .xap?

  • Anonymous

    In breve’s link above (pointing to MSDN), Microsoft says:

    If you use “/debug:full”, be aware that there is some impact on the speed and size of JIT optimized code and a small impact on code quality with “/debug:full”.
    We recommend “/debug:pdbonly” or no PDB for generating “release” code.

  • jrobbins

    Breeve & Phillipg,

    The MSDN documentation is out of date and wrong. 🙂

    I personally double- and triple-checked with the compiler and debugger guys at Microsoft to ensure there was no difference between the various build switches. They assured me there was none with .NET 2.0 and higher. The documentation is still referring to .NET 1.0/1.1.

    Hope it helps!
    – John Robbins

  • Anonymous

    How come /debug+ /debug:pdbonly /optimize- is optimizing the code at runtime in .NET 3.5?

  • jrobbins


    Because a lot of the work for the optimizer is done through the JIT compiler.

    – John Robbins

  • Anonymous

    Well, these are your own words:

    “The main difference will be in the DebuggableAttribute declaration for the assembly. When built /optimize+ and a /debug switch, a DebuggingMode.IgnoreSequencePoints is passed to the DebuggableAttribute to tell the JIT compiler that it doesn’t need to load the PDB file in order to correctly JIT the IL”

    Shouldn’t “load the PDB file” affect JIT performance?

  • Thanks for the great information. As a Hail Mary, I am going to link to a question I just posted on Stack Overflow. I have searched a lot to answer this question and am stuck. Thus, any insight you could provide is much appreciated.

  • Mike Socha III

    Ahh beautifully and authoritatively spoken for a change. Thank you so much!!

  • erikj999

    Hi John,

    Thanks for this article – it’s really helpful. I have a question about whether loading PDB files could significantly impact memory consumption. We have a huge (6500 assemblies) application hosted in IIS, so we are wondering what memory overhead loading the PDBs automatically would add. It’s difficult to test because our assemblies are loaded on demand. Just curious if you’ve ever looked into the memory question.