I do not know how many objects will be present, so I keep reading until an exception is thrown. From what I can find on Google, this is not possible. I was wondering whether anyone knows a way?
Answer 1
Here's the trick: subclass ObjectOutputStream and override the writeStreamHeader method:
public class AppendingObjectOutputStream extends ObjectOutputStream {

    public AppendingObjectOutputStream(OutputStream out) throws IOException {
        super(out);
    }

    @Override
    protected void writeStreamHeader() throws IOException {
        // do not write a header, but reset:
        // this line added after another question
        // showed a problem with the original
        reset();
    }
}
To use it, just check whether the history file exists or not and instantiate either this appendable stream (in case the file exists = we append = we don't want a header) or the original stream (in case the file does not exist = we need a header).
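A minimal sketch of that check (the helper name openHistory and the file name history.ser are chosen here for illustration; they are not part of the answer):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class HistoryWriter {

    // Returns a stream that is safe to write objects to, whether or not
    // the history file already exists.
    static ObjectOutputStream openHistory(File file) throws IOException {
        boolean append = file.exists();
        FileOutputStream fos = new FileOutputStream(file, append);
        return append
                ? new AppendingObjectOutputStream(fos) // existing file: skip the header
                : new ObjectOutputStream(fos);         // new file: write the header
    }

    public static void main(String[] args) throws IOException {
        try (ObjectOutputStream out = openHistory(new File("history.ser"))) {
            out.writeObject("another record");
        }
    }
}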
Edit
I wasn't happy with the first name for the class. This one is better: it describes what it's for rather than how it's done.
Edit
Changed the name once more to clarify that this stream is only for appending to an existing file. It cannot be used to create a new file with object data.
Edit
Added a call to reset() after this question showed that the original version, which simply overrode writeStreamHeader to be a no-op, could under some conditions create a stream that couldn't be read.
Answer 2
As the API documentation says, the ObjectOutputStream constructor writes the serialization stream header to the underlying stream, and this header is expected to appear only once, at the beginning of the file. So calling
new ObjectOutputStream(fos);
multiple times on the FileOutputStream that refers to the same file will write the header multiple times and corrupt the file.
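For illustration, this is the pattern being warned against; opening the file a second time, even in append mode, still writes a second header (the file name is made up):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class CorruptedAppendDemo {
    public static void main(String[] args) throws IOException {
        // First session: the constructor writes the stream header, then the object.
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream("data.ser"))) {
            out.writeObject("first");
        }

        // Second session: the constructor writes ANOTHER header into the middle
        // of the file. A single ObjectInputStream reading this file later will
        // throw StreamCorruptedException after the first object.
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream("data.ser", true))) {
            out.writeObject("second");
        }
    }
}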
Answer 3
Because of the precise format of the serialized file, appending will indeed corrupt it. You have to write all objects to the file as part of the same stream; otherwise the read will fail when it hits stream metadata where it expects an object.
The easiest way to avoid this problem is to keep the OutputStream open when you write the data, instead of closing it after each object. Calling reset() might be advisable to avoid a memory leak.
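A sketch of that approach (the method and parameter names here are illustrative):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.List;

public class SingleStreamWriter {

    // All objects go through one ObjectOutputStream, so the header is
    // written exactly once. reset() clears the stream's references to
    // previously written objects so they can be garbage collected.
    static void writeAll(List<?> records, String path) throws IOException {
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(path))) {
            for (Object record : records) {
                out.writeObject(record);
                out.reset();
            }
        }
    }
}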
The alternative would be to read the file back as a series of consecutive ObjectInputStreams as well. But this requires you to keep count of how many bytes you read (which can be implemented with a FilterInputStream), then close the InputStream, open it again, skip that many bytes, and only then wrap it in an ObjectInputStream; see the sketch below.
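The byte-counting part could look roughly like this (CountingInputStream is a name chosen here; wiring it into the reopen-and-skip loop, and accounting for ObjectInputStream's internal buffering, is left out of this sketch):

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Counts the bytes consumed from the wrapped stream, so the caller knows
// how far to skip() after reopening the file.
class CountingInputStream extends FilterInputStream {

    private long count;

    CountingInputStream(InputStream in) {
        super(in);
    }

    long getCount() {
        return count;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1) {
            count++;
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) {
            count += n;
        }
        return n;
    }

    @Override
    public long skip(long n) throws IOException {
        long skipped = super.skip(n);
        count += skipped;
        return skipped;
    }
}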