When you have a data set this large, you can try creating a custom registration frame.
You have 487 light frames, so go through each night and pick three subs from the beginning, middle, and end of the night. Then use the mosaic merge mode in StarAlignment to merge them together; I prefer to use the frame adaptation setting for this. Alternatively, you can run Register/Union - Separate on each sub and combine them with DynamicAlignment.
When you have a mosaic image created for each night, combine those in StarAlignment as well: either register them to a single image, or again use mosaic merge in StarAlignment or DynamicAlignment to combine all three. The final goal is a master reference frame that has not been cropped to one frame's footprint; it should show all of the subtle offsets you had from reframing and dithering each night. You will register all of your images to this master frame and then integrate them with ImageIntegration.
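If it helps to see why the uncropped reference matters, here is a rough Python sketch (the frame size and per-night offsets are made up, and this is not PixInsight code) comparing the union canvas the master frame should cover against the cropped intersection a normal reference would leave you with:

    # Hypothetical frame size and per-night shifts (pixels) from reframing/dither drift.
    frame_w, frame_h = 6000, 4000
    offsets = [(0, 0), (35, -20), (-12, 48)]   # night 1, night 2, night 3

    xs = [dx for dx, dy in offsets]
    ys = [dy for dx, dy in offsets]

    # Union canvas: large enough to hold every night's framing at its own offset.
    union_w = frame_w + (max(xs) - min(xs))
    union_h = frame_h + (max(ys) - min(ys))

    # Intersection: what is left if the reference is cropped to the common coverage.
    inter_w = frame_w - (max(xs) - min(xs))
    inter_h = frame_h - (max(ys) - min(ys))

    print(f"union canvas:         {union_w} x {union_h}")
    print(f"cropped intersection: {inter_w} x {inter_h}")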
I highly suggest using the linear fit clipping rejection algorithm along with large-scale pixel rejection. I probably sound like a broken record after the past few posts, but set a very small ROI box on an opened image, run ImageIntegration, and see if you can notice a difference in your background. After an integration the high rejection map will be filled with hot pixels and the low rejection map will contain the overlapping edges of your frames, and you can fine-tune from there. Large-scale pixel rejection will also play a role in how your edges overlap; you can adjust it to taste as well, depending on how much signal loss or gain may have occurred in an overlapping area. It's good to crank up the buffer and stack size while you are using an ROI, assuming you have enough RAM and a fast processor, but be sure to set them back to normal when you are happy with the results and ready to stack everything at once. I suggest you spend a long time with the ROI in different areas of the image before you finally integrate.
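If you are curious what linear fit clipping is actually doing, here is a rough numpy sketch of the idea for a single pixel stack (the function name, parameter names, and defaults are my own for illustration, not PixInsight's):

    import numpy as np

    def linear_fit_clip(stack, sigma_low=5.0, sigma_high=2.5, max_iter=5):
        # Sort the values one pixel takes across all subs, fit a straight line
        # to the sorted values, and reject anything that strays too far from it.
        values = np.sort(np.asarray(stack, dtype=float))
        ranks = np.arange(values.size)
        keep = np.ones(values.size, dtype=bool)
        for _ in range(max_iter):
            if keep.sum() < 3:
                break
            slope, intercept = np.polyfit(ranks[keep], values[keep], 1)
            resid = values - (slope * ranks + intercept)
            sigma = resid[keep].std()
            if sigma == 0:
                break
            new_keep = (resid >= -sigma_low * sigma) & (resid <= sigma_high * sigma)
            if new_keep.sum() == keep.sum():
                break
            keep = new_keep
        return values[keep].mean(), keep

    # 20 subs for one pixel, with one hot pixel; the high clip should catch it.
    rng = np.random.default_rng(0)
    stack = rng.normal(0.12, 0.01, 20)
    stack[7] = 0.9
    mean, mask = linear_fit_clip(stack)
    print(f"clipped mean {mean:.4f}, rejected {np.count_nonzero(~mask)} value(s)")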
Also, when you have the final integrated image, are you linear fitting the data first?
For DSLR data, especially with an integration this large, you should split the channels and run Statistics on each channel at 14-bit with normalization off to find the channel with the lowest median. Now use the LinearFit tool with that lowest-median channel as the reference, apply it to the other two channels, and finally recombine the channels with ChannelCombination.
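In case the linear fit step sounds abstract, this numpy sketch shows the same idea on a made-up image (it is not the LinearFit tool itself, just the underlying math of scaling the other channels to match the lowest-median one):

    import numpy as np

    # Synthetic stand-in for your split R/G/B masters.
    rng = np.random.default_rng(1)
    base = rng.random((100, 100)) * 0.2
    rgb = np.stack([base * 1.3 + 0.05,    # R: brighter, offset
                    base * 1.0 + 0.01,    # G
                    base * 0.8 + 0.02])   # B

    medians = [np.median(c) for c in rgb]
    ref = int(np.argmin(medians))         # channel with the lowest median
    print("medians:", [f"{m:.3f}" for m in medians], "reference:", "RGB"[ref])

    fitted = rgb.copy()
    for i in range(3):
        if i == ref:
            continue
        # Least-squares straight line mapping channel i onto the reference.
        a, b = np.polyfit(rgb[i].ravel(), rgb[ref].ravel(), 1)
        fitted[i] = a * rgb[i] + b

    print("medians after fit:", [f"{np.median(c):.3f}" for c in fitted])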
Now you can do BackgroundNeutralization with a small sampled area and determine whether further gradient removal is needed. I have much better results with ABE than DBE in thick IFN areas. For me the best solution is to have ABE apply very subtle corrections to any gradients so that it does not over-correct and darken areas where it suspects a flaw. When you have a good, subtle setting dialed in, you can run multiple iterations of ABE until the data is where it should be. In your example with the excessive noise issues, ABE seems to have done exactly that; I notice it happens a lot, it just seems to be the nature of the tool. I try to avoid background gradient removal entirely if at all possible.
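To show what I mean by subtle, repeated corrections, here is a toy numpy sketch (this is not ABE, just a first-order gradient model removed a little at a time on synthetic data, with made-up iteration and strength values):

    import numpy as np

    def subtle_flatten(img, iterations=4, strength=0.25):
        # Fit a tilted-plane background model and remove only a fraction of it
        # per pass, so faint structure is less likely to be flattened at once.
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
        out = img.astype(float).copy()
        for _ in range(iterations):
            coeffs, *_ = np.linalg.lstsq(A, out.ravel(), rcond=None)
            model = (A @ coeffs).reshape(h, w)
            out -= strength * (model - model.mean())   # keep the mean level
        return out

    # Synthetic frame: flat sky plus a linear gradient standing in for light pollution.
    rng = np.random.default_rng(2)
    h, w = 200, 300
    xx = np.tile(np.arange(w), (h, 1))
    frame = 0.1 + 0.05 * xx / w + rng.normal(0, 0.002, (h, w))
    corrected = subtle_flatten(frame)
    print(f"gradient before: {frame[:, -1].mean() - frame[:, 0].mean():.4f}")
    print(f"gradient after:  {corrected[:, -1].mean() - corrected[:, 0].mean():.4f}")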
I shot this region with a DSLR about a year ago using the technique described above.
https://www.flickr.com/photos/lmnosunsetdeluxe/33193121913/in/dateposted/