istream not fully recovering what has been put to stringstream


I use the following setup:

#include <bits/stdc++.h>
using namespace std;
class foo {
public:
    void bar( istream &in, int n ) {
        vector<tuple<int,int,int,int>> q;
        int x,y,a,b;
        for ( q.clear(); in >> x >> y >> a >> b; q.push_back(make_tuple(x,y,a,b)) );
        assert( n == q.size() );
    }
};
int main() {
    stringstream ss;
    for ( int i= 0; i < 100; ++i )
        ss << rand() << " " << rand() << " " << rand() << " " << rand() << endl;
    ss.clear(), ss.seekg(0,std::ios::beg);
    (new foo())->bar(ss,100);
}

In fact, my code is more complex than this, but the idea is that I put stuff (long long ints, to be exact) into a stringstream and call a function, supplying the created stringstream as an istream object. The example above works fine, but in my particular case I put in, say, 2 million tuples. The problem is that at the other end, inside foo, the numbers are not fully recovered (I get back fewer than 2000000). Can you think of a scenario in which this could happen? Can this in >> x >> y >> a >> b somehow terminate before the input is exhausted?
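One way to narrow this down is to look at the stream flags right after the reading loop ends. The sketch below is a hypothetical diagnostic variant of bar() (not the original code), using the same reading loop but reporting why extraction stopped instead of asserting on the count.

#include <iostream>
#include <sstream>
#include <tuple>
#include <vector>
using namespace std;

// Hypothetical diagnostic variant of bar(): same reading loop,
// but report the stream state when the loop stops.
void bar_diagnose( istream &in ) {
    vector<tuple<int,int,int,int>> q;
    int x,y,a,b;
    for ( q.clear(); in >> x >> y >> a >> b; q.push_back(make_tuple(x,y,a,b)) );
    // eof()==1 here means the loop stopped because the input ran out (the expected case);
    // fail()==1 with eof()==0 means a token could not be parsed as an int;
    // bad()==1 means an underlying I/O error.
    cerr << "tuples read: " << q.size()
         << "  eof=" << in.eof() << " fail=" << in.fail() << " bad=" << in.bad() << '\n';
}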

EDIT: I used this check:

if ( ss.rdstate() and std::stringstream::badbit ) {
    std::cerr << "Problem in putting stuff into stringstream!\n";
    assert( false );
}

Somehow, everything passed this check.
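Note that `and` here is the logical operator, so the condition fires whenever rdstate() is non-zero at all, not specifically when badbit is set. A bitwise test of the intended bit would look like this sketch:

if ( ss.rdstate() & std::stringstream::badbit ) {   // bitwise &: test badbit alone
    std::cerr << "Problem in putting stuff into stringstream!\n";
    assert( false );
}
// ss.bad() is an equivalent and less error-prone spelling of the same test.

Since the branch was never taken, the stricter logical form also tells us that no error flag at all was set while writing.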

EDIT: As I said, I did a sanity check inside main() by recovering the input numbers with the >> method, and I did get back 2 million (tuples of) numbers. It is only when the stringstream object is passed to foo that it recovers only a fraction of the numbers, not all of them.
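One detail worth keeping in mind with that pattern (a standalone sketch, not the original code): once a sanity check has read the stream to its end, the stream carries eofbit/failbit and must be cleared before rewinding, or every subsequent read fails immediately.

#include <iostream>
#include <sstream>

int main() {
    std::stringstream ss("1 2 3 4");
    int v, first= 0, second= 0;
    while ( ss >> v ) ++first;     // first pass consumes everything, sets eofbit/failbit
    ss.clear();                    // clear the flags *before* seeking
    ss.seekg(0,std::ios::beg);
    while ( ss >> v ) ++second;    // second pass works again
    std::cout << first << " " << second << "\n";   // prints "4 4"
}

For what it is worth, the runall() function in the full context below does call clear() before seekg(), so the rewind itself looks correct.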

EDIT: For what it is worth, I am pasting the actual context here. It will not compile because of its dependencies, but at least we will be able to see the offending lines. The run() method fails to recover the queries supplied by the main() method.

#include <iostream>
#include <algorithm>
#include <chrono>
const unsigned long long PERIOD= 0x1full;
class ExpRunnerJSONOutput : public ExperimentRunner {
    std::string answers;
    void set_name( std::string x ) {
        this->answers= "answerswers."+x+".txt";
    }
public:
    ExpRunnerJSONOutput( query_processor *p ) : ExperimentRunner(p) {
        set_name(p->method_name);
    }
    ExperimentRunner *setProcessor( query_processor *p) override {
        ExperimentRunner::setProcessor(p);
        set_name(p->method_name);
        return this;
    }
    // in: the stream of queries
    // out: where to write the results to
    virtual void run( std::istream &in, std::ostream &out ) override {
        node_type x,y;
        value_type a,b;
        unsigned long long i,j,rep_period= (16383+1)*2-1;
        auto n= tree->size();
        std::vector<std::tuple<node_type,node_type,value_type,value_type>> queries;
        for ( queries.clear(); in >> x >> y >> a >> b; queries.push_back(std::make_tuple(x,y,a,b)) ) ;
        value_type *results= new value_type[queries.size()], *ptr= results;
        /* results are stored in JSON */
        nlohmann::json sel;
        long double total_elapsed_time= 0.00;
        std::chrono::time_point<std::chrono::high_resolution_clock,std::chrono::nanoseconds> start, finish;
        long long int nq= 0, it= 0;
        start= std::chrono::high_resolution_clock::now();
        int batch= 0;
        for ( auto qr: queries ) {
            x= std::get<0>(qr), y= std::get<1>(qr);
            a= std::get<2>(qr), b= std::get<3>(qr);
            auto ans= processor->count(x,y,a,b); nq+= ans, nq-= ans, ++nq, *ptr++= ans;
        }
        finish = std::chrono::high_resolution_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(finish-start);
        total_elapsed_time= elapsed.count();
        sel["avgtime_microsec"]= total_elapsed_time/nq*(1e-3);
        out << sel << std::endl;
        out.flush();
        delete[] results;
    }
    ~ExpRunnerJSONOutput() final {}
};
void runall( std::istream &in, char *res_file, ExpRunnerJSONOutput *er ) {
    in.clear(), in.seekg(0,std::ios::beg);
    std::string results_file= std::string(res_file);
    std::ofstream out;
    try {
        out.open(results_file,std::ios::app);
    }
    catch ( std::exception &e ) {
        throw e;
    }
    er->run(in,out), out.close();
}
using instant= std::chrono::time_point<std::chrono::steady_clock,std::chrono::nanoseconds>;
void sanity_check( std::istream &in, size_type nq ) {
    node_type x,y;
    value_type a,b;
    size_type r= 0;
    for ( ;in >> x >> y >> a >> b; ++r ) ;
    assert( r == nq );
}
int main( int argc, char **argv ) {
    if ( argc < 5 ) {
        fprintf(stderr,"usage: ./<this_executable_name> <dataset_name> <num_queries> <result_file> K");
        fflush(stderr);
        return 1;
    }
    query_processor *processor;
    std::string dataset_name= std::string(argv[1]);
    auto num_queries= std::strtol(argv[2],nullptr,10);
    auto K= std::strtol(argv[4],nullptr,10);
    std::ifstream in;
    std::ofstream logs;
    try {
        in.open(dataset_name+".puu");
        logs.open(dataset_name+".log");
    } catch ( std::exception &e ) {
        throw e;
    }
    std::string s; in >> s;
    std::vector<pq_types::value_type> w;
    w.clear();
    pq_types::value_type maxw= 0;
    for ( auto l= 0; l < s.size()/2; ++l ) {
        value_type entry;
        in >> entry;
        w.emplace_back(entry);
        maxw= std::max(maxw,entry);
    }
    in.close();
    const rlim_t kStackSize= s.size()*2;
    struct rlimit r1{};
    int result= getrlimit(RLIMIT_STACK,&r1);
    if ( result == 0 ) {
        if ( r1.rlim_cur < kStackSize ) {
            r1.rlim_cur= kStackSize;
            result= setrlimit(RLIMIT_STACK,&r1);
            if ( result != 0 ) {
                logs << "setrlimit returned result = " << result << std::endl;
                assert( false );
            }
        }
    }
    logs << "stack limit successfully set" << std::endl;
    instant start, finish;
    remove(argv[3]);
    auto sz= s.size()/2;
    random1d_interval_generator<> rig(0,sz-1), wrig(0,maxw);
    auto node_queries= rig(num_queries), weight_queries= wrig(num_queries,K);
    assert( node_queries.size() == num_queries );
    assert( weight_queries.size() == num_queries );
    std::stringstream ss;
    ss.clear(), ss.seekg(0,std::ios::beg);
    for ( int i= 0; i < num_queries; ++i )
        ss << node_queries[i].first << " " << node_queries[i].second << " " << weight_queries[i].first << " " << weight_queries[i].second << "\n";
    ss.clear(), ss.seekg(0,std::ios::beg);
    sanity_check(ss,num_queries);
    start = std::chrono::steady_clock::now();
    auto *er= new ExpRunnerJSONOutput(processor= new my_processor(s,w,dataset_name));
    finish = std::chrono::steady_clock::now();
    logit(logs,processor,start,finish);
    runall(ss,argv[3],er), delete processor;
    logs.close();
    return 0;
}

EDIT: I wonder whether this has something to do with ifstream.eof() - reaching the end of the file before the real end. Now, how do I confirm the hypothesis that reading stops once we reach a byte with value 26?
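One way to test that hypothesis directly is a small standalone experiment (a sketch, not the original code): plant a byte with value 26 in the middle of the data and see whether, and with which flags, extraction stops there.

#include <iostream>
#include <sstream>

int main() {
    std::stringstream ss;
    ss << 1 << ' ' << 2 << ' ';
    ss << static_cast<char>(26);        // the suspect byte (Ctrl-Z / SUB)
    ss << ' ' << 3 << ' ' << 4 << '\n';
    int v, count= 0;
    while ( ss >> v ) ++count;
    // count shows how many ints were read before the loop stopped;
    // eof()/fail() show whether it stopped at a real end or on a bad character.
    std::cout << "read " << count << " ints, eof=" << ss.eof()
              << " fail=" << ss.fail() << '\n';
}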

EDIT: One more update. Looking at things inside foo, rdstate() returned 4, fail() == 1, and eof() == 0. So, apparently, the end of the file has not been reached.

You are not checking the state of the stream. There is an upper limit on how much you can fit in - basically the maximum string size. This is discussed in detail in

Checking for errors when writing to a stringstream?

stringstream ss;
for (int i = 0; i < 100000000; ++i) //or some other massive number?
{
    ss << rand() << " " << rand() << " " << rand() << "  " << rand() << endl;
    if (ss.rdstate() & stringstream::badbit)
        std::cerr << "Problem!\n";
}

You may want to check each individual write of the numbers.
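A per-write check along those lines could look like this sketch (the loop bound is made up; operator<< returns the stream, so the whole chained insertion can be tested at once):

#include <cstdlib>
#include <iostream>
#include <sstream>

int main() {
    std::stringstream ss;
    for ( int i= 0; i < 2000000; ++i ) {
        // test the chained insertion itself, so a failed write is caught immediately
        if ( !(ss << rand() << ' ' << rand() << ' ' << rand() << ' ' << rand() << '\n') ) {
            std::cerr << "write " << i << " failed, rdstate=" << ss.rdstate() << '\n';
            return 1;
        }
    }
}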

In the end, I used good old FILE * instead of istream and everything worked as expected. For some reason, the latter was reading only a part of the file (a prefix of it, to be precise) and stopped prematurely with fail() being true. I have no idea why.
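For reference, the FILE*-based reading mentioned above would look roughly like the sketch below (not the original code; it assumes whitespace-separated long long fields, as described earlier in the question):

#include <cstdio>
#include <tuple>
#include <vector>

// Sketch: read whitespace-separated 4-tuples with C stdio instead of an istream.
std::vector<std::tuple<long long,long long,long long,long long>> read_queries( std::FILE *fp ) {
    std::vector<std::tuple<long long,long long,long long,long long>> q;
    long long x, y, a, b;
    while ( std::fscanf(fp, "%lld %lld %lld %lld", &x, &y, &a, &b) == 4 )
        q.push_back(std::make_tuple(x,y,a,b));
    return q;
}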