
lainchan archive - /sec/ - 2



File: 1492328028861-0.png (82.6 KB, 300x225, Room_641A_exterior.jpg)

No.2

This is the Simple Security Questions thread for simple questions.

If you have a simple question and a suitable thread doesn't already exist, just post it here and someone will probably try to answer it for you.

Remember to do some research before asking your question. No one wants to answer a question that a simple search can already resolve.

  No.3

what are lainon's favorite resources for learning various forms of safeSecs?

I know that one Lainon was getting a team together to create a guide to good opsec practices.

I know there are plenty of good resources for practicing various forms of hacking, and for applying what you learn to your own practices and netsec.

but i'm curious what resources people have used to increase their knowledge of various *secs, and if they'd be willing to share them. (i'd start a /sec/ pdf thread, if i had any to offer.)

  No.13

Does anybody know of a good call spoofing service? (Free is preferred.)
I'd like to improve my social engineering skills.

  No.19

Does anyone know how much Android base phones home, without gapps? I have a DuraXE which is an Android phone masquerading as a dumbphone. I think it's just using the Android kernel and base. As far as I can tell, none of the stock apps are there.

I'll be using it just to make calls and to occasionally tether to a Linux laptop with a VPN. But do you think I should try to get into the hosts file and block Google domains, to try to keep it from data collecting? Something like the sketch below is what I had in mind.
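Rough idea, assuming a rooted phone with a remountable /system (a big assumption on this hardware); the domains are illustrative guesses, not a real blocklist:

  (as root on the phone, e.g. via adb shell + su)
  # mount -o remount,rw /system
  # echo "0.0.0.0 www.google-analytics.com" >> /system/etc/hosts
  # echo "0.0.0.0 android.clients.google.com" >> /system/etc/hosts
  # mount -o remount,ro /system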

  No.20

File: 1492362825786.png (30.83 KB, 200x60, android-cve.png)


  No.21

>>13

1) arduino board x 1
2) arduino gsm shield x 2
3) sim cards x 10 - infinity
4) build or find a multi-sim adaptor x 1 or 2
5) set up the sim cards on shield 1 (with the adaptor) for making outgoing calls
6) set up shield 2 with sim cards to dial into (optionally with its own adaptor)
7) wire everything up in software & forward all calls from shield 2 to the multi-sim shield for the outgoing leg
8) solar power & a long-life battery
9) place it in a remote location, far away from you
10) call into your box using any burner phone
11) happy social exploiting

sim-card adaptors allow multiple sim cards to be used for sending (forwarding your calls) & receiving (accepting your calls), so you are not dialling in & out on the same numbers.

i know my response does not contain links to call spoofing services.

  No.22

Is PIA a good option for a vpn?

  No.23


  No.24

>>22
Absolutely not. They're at the forefront of sponsored recommendations, and their privacy practices are fairly antithetical to the purpose behind a VPN.

You get what you pay for; a decent VPN will cost. I recommend NordVPN, and I have considered Mullvad, though I haven't used it personally. After some small social engineering in the form of an incomplete sign-up, I have a code for two years of NordVPN for $79, which I can share with anyone interested. Using a VPN routed through a decent jurisdiction (e.g., Switzerland) will take a toll on your speed, though.

https://thatoneprivacysite.net/vpn-comparison-chart/ is a good reference.

  No.25

>>24
damn, I had no idea about this whole 14 eyes thing.

  No.29

>>24
friendly reminder that NordVPN's key got leaked
https://gist.github.com/kennwhite/1f3bc4d889b02b35d8aa

>>22

use Tor without a VPN if at all possible.
if you simply must have a VPN:
Mullvad has a good track record (it's slow, but that's the price you pay);
else roll your own with AlgoVPN or Streisand on bulletproof hosting (rough sketch below).
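For the Algo route, the setup is roughly this (from memory of the project's README, so check upstream before trusting it):

  $ git clone https://github.com/trailofbits/algo && cd algo
  $ virtualenv env && . env/bin/activate
  $ pip install -r requirements.txt
  $ ./algo    # interactive: pick a cloud provider, region, and users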

  No.37

Hey Lainons, what are some good and up to date hacker/infosec zines?

  No.42

>>37
I strongly recommend Phrack Magazine. I think many lainons already know this one, right?

http://phrack.org/

https://en.wikipedia.org/wiki/Phrack

First published in 1985, it became the definitive online underground zine, the frontier of hacker culture. This is where legendary articles like The Hacker Manifesto and Smashing The Stack For Fun And Profit were published. There are both technical and spiritual/philosophical articles; the latter played an important role in forming what we know as hacker culture.

The only problem is that there are fewer and fewer old-school hackers and new articles come out very slowly, but it is a must-read for anyone interested in hacker culture.

  No.43

>>37
PoC or GTFO is pretty great. A lot of the articles are seriously technical though, and take a fair bit of getting your head round.

https://www.pocogtfo.com/

  No.45

>>43
Thanks for mentioning it. Personally I think this is one of the few zines that has inherited the underground hacking culture...

Also don't forget to read their piece of sarcasm on regression analysis.

http://openwall.info/wiki/people/solar/pocorgtfo

  No.64

>>29
What's wrong with using VPN + Tor? Also, I'm not sure if self-hosting a VPN is a good idea, especially for people who seed torrents a lot.

  No.65

>>64
There's nothing wrong with using VPN + Tor; if anything it's better to do that than what >>29 says.
Multiple Letter Agencies can see _when_ you're using Tor if you're not using a VPN or something like meek (minimal bridge sketch below): https://blog.torproject.org/blog/how-use-%E2%80%9Cmeek%E2%80%9D-pluggable-transport
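A minimal torrc sketch for meek, with placeholder values; grab a real bridge line and the meek-client path from a current Tor Browser bundle:

  # url/front/client path below are placeholders, not working values
  UseBridges 1
  ClientTransportPlugin meek exec /usr/bin/meek-client
  Bridge meek 0.0.2.0:2 url=https://meek.example.com/ front=cdn.example.com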

  No.66

>>22

Somebody made a spreadsheet listing most of the VPN providers with their pros and cons.

Check it out

https://thatoneprivacysite.net/vpn-comparison-chart/

  No.69

I would like some recommendations for a good anonymous email service

  No.70

>>69
If you use Tor, cock.li allows you to create an anonymous account. Just remember not to put in any information linkable to you. Ideally:
>generate a random string
  # head -n 1 < /dev/urandom | base64 
>get some of the characters and fill the account registration

Of course, that does not mean cock.li is secure.
You should use PGP, reop [1] or codecrypt [2] before sending the messages (a minimal GnuPG example is below). Or, better, host your own server (pay with bitcoin and access it only via Tor).

[1] http://www.tedunangst.com/flak/post/reop
[2] https://github.com/exaexa/codecrypt
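Minimal GnuPG sketch; the recipient key ID here is hypothetical:

  $ gpg --encrypt --armor --recipient 0xDEADBEEF message.txt
  # writes message.txt.asc, which is what you actually send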

  No.71

>>64
>What's wrong with using VPN + Tor?

If you use VPN > Tor (in that order), there's no problem. It's good practice if you're using IPsec (but not if you're using PPTP or TLS).
Now, if you're using Tor > VPN (in that order), that's bad. The main thing about Tor is that it changes its exit nodes non-deterministically. If you use a VPN as your endpoint (the point that actually connects to the server you're trying to reach), then people can build a profile of your traffic activity, since you always exit from the same IP address (the VPN's).
You should only use VPN > Tor. That's my opinion, at least. Some people will disagree.

  No.76

>>64
https://www.torproject.org/docs/faq.html.en#IsTorLikeAVPN

>>71
>If you use VPN > Tor (in that order) there's no problem.
IMO there could be: if you want to hide the fact that you are using Tor, your ISP could still use DPI and find out that you are using it.

>Now, if you're using Tor > VPN (in that order), then that's bad.

Indeed it is; I use it for some websites that block Tor, however. I just download a free VPN in the Whonix workstation, do my things, and then create a new workstation, although that is not strictly necessary.

  No.77

>>76
Yep, well said.
But I'm curious: why are you using Whonix?
It's much simpler to just redirect all your traffic to a local Tor TransPort and enable TorDNS (rough sketch below). You won't leak anything. All these things like Whonix, Qubes and Tails seem kinda idiotic to me (except for some good bits, like randomized MAC addresses by default). Virtualization doesn't mean more security. It means "security done the bloat way".
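A rough sketch of that setup on Linux, assuming Tor runs as the debian-tor user and the usual transparent-proxy ports (adjust both for your system):

  # add to /etc/tor/torrc:
  #   TransPort 9040
  #   DNSPort 5353
  #   VirtualAddrNetworkIPv4 10.192.0.0/10
  #   AutomapHostsOnResolve 1
  # then, as root, push everything else through Tor:
  iptables -t nat -A OUTPUT -m owner --uid-owner debian-tor -j RETURN
  iptables -t nat -A OUTPUT -o lo -j RETURN
  iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353
  iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040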

  No.78

>>77
>But, I'm curious: why are you using whonix?
I use it just to experiment, but it really is more secure. Unless an attacker manages to get out of the VM or attacks the gateway, your IP is safe, whereas on a bare system someone with root privileges can simply bypass the firewall.

>All these things about Whonix, and Qubes and Tails seems kinda idiot to me

Why do you think so? There are great advantages in some of those.

>Virtualization doesn't mean more security. It mean "security done the bloat way".

Virtualization really means that it is more difficult for someone to get full control over your computer.

  No.80

>>78
Virtualization is a bad way to do security.
Take the Matryoshka doll analogy: you want something to be safe. Will you add more dolls around layer 0 (where your secret stuff is), or do you want only one doll, but with much stronger security?
You can still say that you want more dolls, but then you're not considering one factor: the cost of the materials.
When you add too many dolls, there is too much effort in building them and maintaining their security, and it becomes easier to find an exploit because there is so much "material" (code).
The right way is to fortify layer 0 with better materials, not more materials (in this analogy, roughly the equivalent of sandboxing and hardening), not to add more entire dolls.

You can also check this thread:
http://marc.info/?l=openbsd-misc&m=119318909016582&w=2

On top of the above, x86 virtualization is not real virtualization. Unlike other architectures such as SPARC64, which had dedicated silicon for virtualization, x86 implements it in microcode. Since microcode is completely closed source, you don't know what the fuarrrk is going on.

>There are great advantages in some of those.


I can see only one advantage in these, and it's in Tails: the non-persistence of changes.
But you can do that with other systems too, especially ones that run from a ramdisk.

  No.82

File: 1492492344800-0.png (25.88 KB, 185x200, authorization.png)

File: 1492492344800-1.png (344.7 KB, 212x300, x86_harmful.pdf)

>>77
>All these things about Whonix, and Qubes and Tails seems kinda idiot to me

Yes, it's kinda idiotic to put the system/applications in a virtual machine rather than actually hardening them.

In fact, we do have the underlying groundwork, such as the famed PaX/grsecurity kernel, and the recent adoption of its features by KSPP (plagiarism, some would say) is a great improvement in security. Userspace/compiler hardening is also essential: we already have NX, stack-protector-strong, RELRO, _FORTIFY_SOURCE and PIC/PIE (typical compiler invocation sketched below). A sandbox/virtual machine can't provide these protections; we need more of them, and we need to roll them out across systems. And systematic audits are always needed.
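For reference, turning those userspace knobs on with GCC looks roughly like this (flag availability depends on the toolchain version):

  $ gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 \
        -fPIE -pie -Wl,-z,relro,-z,now -o prog prog.c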

But please understand: even with this hardening, sandboxing/virtual machines are a requirement if a higher degree of security is needed, in order to restrict the damage of an exploit once it has happened. Instead of a single point of defense, we need defense in depth.

Also, code execution in a PDF reader can do everything the user can. Given how our operating systems and applications are designed, there is really no better way to achieve privilege separation and a reasonable level of security, especially on the desktop.

Yes, if applications and operating systems adopted a different approach to security, that would be the elegant and complete solution, but we will still need Firefox to browse the web and a Unix-like system for the foreseeable future; that is what virtualization is for.

xkcd comic #1200 illustrates this problem well.

Also, an attacker can do a lot of things. As everyone knows, they can implant a rootkit, but what is less known is that they can also plant permanent hardware (firmware-level) malware that almost nobody can detect, because it starts before any OS kernel or even the bootloader. And all of this can be done remotely, by software exploits alone. So once you have been hacked, booting from a Tails CD could still give you a compromised system. There are no effective countermeasures, at least on x86, besides virtualization.

You really need to check the "x86 considered harmful" article by a main Qubes dev.

  No.83

>>69
This question has been asked a thousand times on all imageboards, and on Lainchan I think we have seen this question three times. We really need to put a list of recommendation together as FAQ.

But I'll tell you a secret I discovered by accident: if you want to gather a list of privacy-focused email services, subscribe to the [tor-talk] and [tor-dev] mailing lists and watch the sender addresses; you'll find email services you have never heard of, even if you think you are well-informed. XD

  No.85

>>82
I don't know... I understand your argument, but I think it reinforces my previous one.

>restrict the damage of an exploit


That's why we need privsep and sandboxing. I do not disagree with the use of privsep mechanisms, but I do disagree with the amount of 'security' people think virtualization will bring.
If you need to restrict the damage, don't throw thousands of LOC on top of the main system. That's why capsicum and pledge exist.

>Instead of a single point of defense, we need defense in depth.


My point is that it's not depth, it's an illusion. If you put many lines of code that can hardly be audited on top of everything else, how is that going to help? Using a simple sandbox would be better, or use better software that is simple enough to spot all the high-risk bugs and that has privsep by default.

>a PDF reader code execution can do everything a user can


That's an issue with the PDF software, isn't it? Just use something better, like mupdf without the JS module and without the networking module, or, for the most paranoid, convert the PDF to images and then open those (example below).
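For example, with poppler's pdftoppm, assuming it is installed:

  $ pdftoppm -png untrusted.pdf out    # writes out-1.png, out-2.png, ...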

>there is really no better way to achieve privilege separation and reasonable level of security, especially on the desktop system


http://man.openbsd.org/pledge.2

>we still need FireFox to browse


I, personally, don't. I use Links most of the time (on ion3 wm for tabs). But, I understand what you're trying to say...

>And all of these can be done remotely


Well, if you find some proof, please send it to misc@openbsd.org.
It is possible to reflash your firmware with malicious code, but not remotely, unless you find some really nasty vulnerability.
Also, firmware malware is not the worst case. The worst is hardware trojans, that is, manipulation of the microarchitecture itself to defeat some functionality of the processor. That's a far worse scenario than a firmware or system vulnerability, since no one can really verify it (there are some academic tests, but nothing practical).

>You really need to check the "x86 considered harmful" article by a main Qube dev.


I'll check that, thanks.


I can see myself supporting Qubes in the future if they go with an seL4-based VMM; that would be some interesting soykaf (even though it would still be susceptible to microcode attacks).

  No.86

Mods reading this: is there any possibility of moving the following comments to a new thread of their own? I think it's an interesting enough topic to deserve its own thread:

>>77
>>78
>>80
>>82
>>85

  No.92

What are the underlying assumptions that make code exploitation possible? What exactly is happening when one tries to exploit, say, a black box? What kind of systems make this difficult? What languages make certain exploits impossible, how about operating systems and/or machines?

I hope to be pointed to papers and books, thank you.

  No.93

>>92

>What are the underlying assumptions that make code exploitation possible?


Programming mistakes, if you want to be brief and general. Though not all programming mistakes lead to code exploits (that should be obvious), and not every code exploit comes from a programming mistake (think deliberate backdoors).

>What exactly is happening when one tries to exploit, say, a black box?


Well, you feed it input and see how it behaves.

More precisely, you can in most cases guess what language the black box is implemented in, and then use that to your advantage.

For example, if you're looking for an RCE in a website, you can usually tell whether the site runs PHP, Python, Ruby, or whatever. If it is running PHP and you want to see if it is vulnerable to unsafe deserialization (one vector through which RCE can be achieved), it makes no sense to send "pickled" (i.e. Python-serialized) payloads.

For a C program it's much the same. Say you suspect a C program of being vulnerable to a buffer overflow. You can then feed it a very long string of A's while running it in a debugger. You might see it crash with something like "invalid instruction 0x41414141", which indicates that your string of A's overwrote the saved return address with "AAAA" (0x41 is 'A'), meaning you could replace the A's with an address pointing at valid shellcode and gain code execution that way.
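A rough sketch of that test, using a hypothetical locally built binary ./vuln that copies argv[1] into a fixed-size stack buffer:

  $ gdb --args ./vuln "$(python3 -c 'print("A"*200)')"
  (gdb) run
  # expect something like: Program received signal SIGSEGV ... 0x41414141 in ?? ()
  # i.e. the saved return address was overwritten with "AAAA"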

>What kind of systems make this difficult?


Sandboxing a program mitigates the impact of a successful code exploit (one lightweight example below). That is also why you analyze malware in a virtual machine.
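For instance, with firejail, assuming it is installed (this is containment only, not a substitute for a throwaway VM when handling real malware):

  $ firejail --net=none --private ./suspicious-binary
  # --net=none drops network access, --private gives it a temporary empty home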

>What languages make certain exploits impossible [...]


Memory-safe languages (e.g. Python, Ruby) don't have the memory-related exploits that C programs have. However, it is important to note that there's a difference between the design spec of a language and its implementation. For example, Ruby is designed to be memory safe (AFAIK), but its interpreter is usually implemented in C, so the same kinds of bugs can still arise. See for example this guy's bug reports:

https://hackerone.com/raydot?sort_type=latest_disclosable_activity_at&filter=type%3Aall%20from%3Araydot&page=1&range=forever

It is also possible to instrument a compiler with control-flow analysis to ensure that the program doesn't jump somewhere it's not supposed to.

Also, most programs are now compiled with a non-executable stack, so the above probably won't work as-is. That still doesn't prevent ROP-based exploits:

https://en.wikipedia.org/wiki/Return-oriented_programming

I'm not aware of a (useful) language that is 100% safe from every kind of known code exploit, in the presence of programming mistakes.

>[...] how about operating systems and/or machines?


Again, sandboxing helps mitigate the impact of an exploit. Something like Qubes OS might be interesting for you to look at (I think, I'm not that familiar with it).

Note that even sandboxes can be broken out of (obviously, since they too were coded by someone). A fun example is the recent DirtyCOW exploit, which someone showed could be used to escape from a Docker container (granted, Docker is not a security-oriented "sandbox" as far as I'm aware):

https://blog.paranoidsoftware.com/dirty-cow-cve-2016-5195-docker-container-escape/


Your question is very general, so it's hard to point you anywhere concrete.

  No.95

>>92
>What are the underlying assumptions that make code exploitation possible?
Too many. There are a great number of possible ways to exploit software. For example, a buffer overflow happens when a program writes more data into a buffer than it allocated for it, clobbering adjacent memory (saved return addresses, function pointers, other variables); the related over-read bug goes the other direction and lets an attacker pull neighbouring memory, things like passwords and keys, out of the process. But there are many classes; I can't cite everything.

>What exactly is happening when one tries to exploit, say, a black box?


What do you mean by a "black box"? Do you mean an airgapped computer?
If that's the case, there are many ways to exploit an airgapped computer: via physical media (USB stick, CD-ROM, swapping HDs), firmware reflashing (BIOS, WiFi firmware), the EM waves the computer emits (the class of attacks called TEMPEST), the power supply (power analysis), even the sounds the computer makes (acoustic analysis).

>What kind of systems make this difficult?


Systems that are designed to be secure by default, such as OpenBSD and seL4. Open-source firmware, such as coreboot, also helps.
For side-channel attacks: a Faraday cage (TEMPEST), running on battery (power analysis), generating masking vibrations/noise (acoustic analysis).

>What languages make certain exploits impossible


I assume you're talking about programming languages, right? Any language with formal proofs and formal verification will make some exploits impossible (if the proofs go all the way down to assembly; if not, the compiler can introduce bugs). Languages with dependent types, such as Idris and ATS, have formal proofs inside the language itself. Languages that are closer to logic, such as Twelf and lambda-Prolog, and functional languages such as SML, are also less prone to exploits.


>how about operating systems and/or machines?


For OSes, see above. As for machines, for now we have some open hardware built on a closed ISA, such as the SABRE Lite (i.MX6 processor), and some x86 machines with open firmware (such as the ThinkPad X60, T60 and X200). For real security we'll have to wait for RISC-V and, better, lowRISC.

>I hope to be pointed to papers and books, thank you.


too tired, just type on searx.me

  No.117

>>70

Use -c instead of -n if you want a fixed number of bytes from urandom.

  $ head -c 10 /dev/urandom | base64 

will give you 10 bytes from /dev/urandom and base64 encode them.

  No.118

>>86
There's currently no good way to do this, no. Besides, it's not in good form to start a thread with a message very clearly written as a reply.

  No.119

>>86
Repost them.

  No.120

>>118
>>119
ok, I'll repost it, thanks.

  No.121

>>117
Very good, thanks for refining it.

  No.205

Is it a good idea to use IPSec within your LAN network if your goal is to achieve integrity and encryption between hosts?

  No.211

>>205
>Is it a good idea to use IPSec within your LAN network if your goal is to achieve integrity and encryption between hosts?
I've never personally set up IPsec, but I think it's a good idea if you want to restrict the devices that are allowed to be part of your LAN and to authenticate them beyond just the MAC address. There's also no reason why you can't use additional encryption on top of it, since IPsec operates at the Internet layer, not the transport layer (i.e. you can use OpenVPN/SSH/etc. with it). I'd say go for it (a rough config sketch is below). I'd love to do it, but my router is a bit too space-constrained at the moment to support strongSwan.
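Untested, but a minimal host-to-host transport-mode sketch in strongSwan's classic ipsec.conf format looks roughly like this; the addresses, cipher choices and pre-shared key are placeholders:

  # /etc/ipsec.conf
  conn lan-peer
      left=192.168.1.10
      right=192.168.1.20
      type=transport
      authby=secret
      ike=aes256-sha256-modp2048
      esp=aes256-sha256
      auto=start

  # /etc/ipsec.secrets
  192.168.1.10 192.168.1.20 : PSK "use-a-long-random-secret"

Certificate-based authentication scales better than a PSK once you have more than a couple of hosts.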

  No.224

I'm probably being too paranoid, but I'm going to be taking my first flight soon and I was thinking of possibly taking my laptop on the trip. How bad are TSA searches of laptops? I've already got whole disk encryption on it.

  No.225

>>224
If you refuse to decrypt and you happen to be searched, you don't get in, unless you're a citizen.

https://www.eff.org/wp/digital-privacy-us-border-2017

  No.226

>>225
I should have mentioned that: I'm a US citizen. I just don't want to be searched, forced to enter my key, and refused boarding on the flight I paid for if I refuse.