Why does this program print the following output? Why can't a std::bitset<8> variable handle 11111111?
#include <bitset>
#include <cstdio>
#include <iostream>
int main()
{
    std::bitset<8> b1(01100100); std::cout << b1 << std::endl;
    std::bitset<8> b2(11111111); std::cout << b2 << std::endl; // see, this variable has been
                                                               // assigned the value 11111111,
                                                               // whereas during execution it
                                                               // takes the value 11000111.
                                                               // Same is the case with b1.
    std::cout << "b1 & b2: " << (b1 & b2) << '\n';
    std::cout << "b1 | b2: " << (b1 | b2) << '\n';
    std::cout << "b1^b2: " << (b1 ^ b2) << '\n';
    getchar();
    return 0;
}
Here is the output:
01000000
11000111
b1 & b2: 01000000
b1 | b2: 11000111
b1^b2: 10000111
At first I thought something was wrong with the header files (I am using MinGW), so I checked with MSVC. But it shows the same thing. Please help!
Alternatively, if you are familiar with hexadecimal: `std::bitset<8> b1(0x64)` and `std::bitset<8> b2(0xff)` should both work. The conversion from any 4 binary bits to a hex digit can be done by hand. – KChaloux 2013-04-08 13:03:49